Re: [zfs-discuss] NFS: Cannot share a zfs dataset added to a labeled zone

2007-10-29 Thread Glenn Faden
I posted an earlier reply to zones-discuss, but I didn't copy all of the forums 
in the original posting. I'm doing so now. I am also correcting some errors in 
my earlier reply:

Yes, it is possible to share a zfs dataset that has been added to a labeled 
zone. 

Set the mountpoint property of your dataset zone/data to be within the 
restricted zone's root. For example:

   # zfs set mountpoint=/zone/needtoknow/root/zone/data zone/data

Then you should specify, using zonecfg, that the dataset is associated with the 
zone.

   zonecfg:zone-name> add dataset
   zonecfg:zone-name:dataset> set name=zone/data
   zonecfg:zone-name:dataset> end

I previously stated that you didn't need to specify the dataset via zonecfg if 
the zone is already running. However, in the general case, you should do so. If 
the dataset is mounted before the zone has been booted, zoneadm will fail to 
boot the zone because its file namespace is not empty.

Then you should be able to share it via NFS by editing the appropriate dfstab 
file in the global zone. In this case, the dfstab file would be:

  /zone/restricted/etc/dfs/dfstab

When the zone is booted, the dataset will be mounted automatically as a 
read-write mount point in the restricted zone with the correct label.
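
For concreteness, a minimal sketch of such a dfstab entry follows. The share 
options are illustrative assumptions, and the pathname is written here as it 
appears from inside the zone (/zone/data); check the Trusted Extensions docs 
for the exact form your release expects:

   # entry in /zone/restricted/etc/dfs/dfstab, edited from the global zone
   share -F nfs -o rw -d "labeled zone data" /zone/data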

A few subtle points:

1. Setting the zfs mountpoint property has the side effect of setting its 
label if the mountpoint corresponds to a labeled zone. Only the global zone 
can do this.

2. The dataset will only be accessible while the restricted zone is ready or 
running. Note that it can be shared (via NFS) even when the zone is in the 
ready state.

3. Labeled zones which dominate the restricted zone (if any) can gain read-only 
access via NFS mounts, specifying a non-shared global zone IP address and the 
full pathname of the mounted dataset as viewed from the global zone. For 
example:

/net/gz-name/zone/restricted/root/zone/data

The second "zone" in the pathname is there because it was specified in the 
original posting, but you can rework the example without it.
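
As a purely illustrative equivalent of the automounter path above (the host 
name gz-name and the mount point /mnt are placeholders from the example), an 
explicit read-only mount from a dominating zone might look like:

   # mount -F nfs -o ro gz-name:/zone/restricted/root/zone/data /mnt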

--Glenn
 
 


[zfs-discuss] zfs boot on Sparc?

2007-10-29 Thread Mauro Mozzarelli
I'm afraid a search doesn't turn up much on this subject, hence this post.

Is there a plan for a Nevada build to include zfs root/boot on Sparc 
architecture?
When?
 
 


Re: [zfs-discuss] zfs boot on Sparc?

2007-10-29 Thread Lori Alt
Mauro Mozzarelli wrote:
> I'm afraid a search doesn't turn up much on this subject, hence this post.
>
> Is there a plan for a Nevada build to include zfs root/boot on Sparc 
> architecture?
> When?
>  
>  

We are aiming to integrate zfs boot for both sparc and x86
into Nevada around the end of this calendar year.

Lori


Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-29 Thread Rayson Ho
Restarting this thread... I've just finished reading the article, "A
look at MySQL on ZFS":
http://dev.mysql.com/tech-resources/articles/mysql-zfs.html

The section " MySQL Performance Comparison: ZFS vs. UFS on Open
Solaris" looks interesting...

Rayson


Re: [zfs-discuss] zfs boot on Sparc?

2007-10-29 Thread Mauro Mozzarelli
Following up, I got this message from Lori:

 
> We are aiming to integrate zfs boot for both sparc and x86
> into Nevada around the end of this calendar year.
> 
> Lori
> 

Lori,

Thank you for your reply. I will probably be one of the first to try it.

Mauro
 
 


Re: [zfs-discuss] zfs boot on Sparc?

2007-10-29 Thread Brian Hechinger
On Mon, Oct 29, 2007 at 08:55:21AM -0700, Mauro Mozzarelli wrote:
> Following up, I got this message from Lori:
>  
> > We are aiming to integrate zfs boot for both sparc and x86
> > into Nevada around the end of this calendar year.
> > 
> > Lori
> 
> Lori,
> 
> Thank you for your reply, I will be probably one of the first to try it.

Not if I beat you to it. :)

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke


[zfs-discuss] ZFS drive replacement

2007-10-29 Thread Stephen Stogner
Hello,
 Is there a way to replace a standalone drive with a raidz implementation in 
zfs?


e.g., zpool replace bigpool c2t1d0 raidz c2t3d0 c2t4d0 c2t5d0

etc..
thanks
 
 


Re: [zfs-discuss] I/O write failures on non-replicated pool

2007-10-29 Thread Robert Milkowski
Hello Nigel,

Thursday, October 25, 2007, 12:02:04 PM, you wrote:

NS> Nice to see some progress, at last, on this bug:
NS> http://bugs.opensolaris.org/view_bug.do?bug_id=6417779
NS> "ZFS: I/O failure (write on ...) -- need to reallocate writes"

NS> Commit to Fix:   snv_77

NS> http://www.opensolaris.org/os/community/arc/caselog/2007/567/onepager/

NS> http://mail.opensolaris.org/pipermail/onnv-notify/2007-October/012782.html

Thanks for spotting this.
Looking at the one-pager, it's not obvious what would happen in the case of
a single top-level vdev failure - will it wait, or will it use ditto blocks
to write the data to another device, as suggested in the bug (though I'm
not sure that's a good idea)?


-- 
Best regards,
 Robert   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-29 Thread Rayson Ho
Hi Tony,

John posted the URL to his article to the databases-discuss this
morning, and I only took a very quick look.

Maybe you can join that list and discuss the configurations further?
http://mail.opensolaris.org/mailman/listinfo/databases-discuss

Rayson



On 10/29/07, Tony Leone <[EMAIL PROTECTED]> wrote:
> This is very interesting because it directly contradicts the results the ZFS 
> developers are posting on the OpenSolaris mailing list.  I just scanned the 
> article; does he give his ZFS settings, and is he using separate ZIL devices?
>
> Tony Leone
>
> >>> "Rayson Ho" <[EMAIL PROTECTED]> 10/29/2007 11:39 AM >>>
> Restarting this thread... I've just finished reading the article, "A
> look at MySQL on ZFS":
> http://dev.mysql.com/tech-resources/articles/mysql-zfs.html
>
> The section " MySQL Performance Comparison: ZFS vs. UFS on Open
> Solaris" looks interesting...
>
> Rayson


Re: [zfs-discuss] Sequential reading/writting from large stripe faster on SVM than ZFS?

2007-10-29 Thread Robert Milkowski
Hello Roch,

Wednesday, October 24, 2007, 3:49:45 PM, you wrote:

RP> I would suspect the checksum part of this (I do believe it's being
RP> actively worked on) :

RP> 6533726 single-threaded checksum & raidz2 parity
RP> calculations limit write bandwidth on thumper


I guess it's single-threaded per pool - that's why, once I created
multiple pools, the performance was much better.

Thanks for the info.


-- 
Best regards,
 Robert   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] ZFS drive replacement

2007-10-29 Thread Cindy . Swearingen
Hi Stephen,

No, you can't replace a single device with a raidz device, but you can
convert a single device into a mirror by using zpool attach. See the output
below.

The other choice is to add to an existing raidz configuration. See
the output below.

I thought we had an RFE to expand an existing raidz device but I
can't find it now.

Examples of these operations are included in the ZFS admin guide:

http://docs.sun.com/app/docs/doc/817-2271/gayrd?a=view

Cindy

zpool attach:

# zpool create waldenpond c1t23d0
# zpool attach waldenpond c1t23d0 c1t24d0
# zpool status waldenpond
   pool: waldenpond
  state: ONLINE
  scrub: resilver completed with 0 errors on Mon Oct 29 11:06:14 2007
config:

 NAME STATE READ WRITE CKSUM
 waldenpond   ONLINE   0 0 0
   mirror ONLINE   0 0 0
 c1t23d0  ONLINE   0 0 0
 c1t24d0  ONLINE   0 0 0

zpool add:

# zpool create goldenpond raidz c1t17d0 c1t18d0 c1t19d0
# zpool add goldenpond raidz c1t20d0 c1t21d0 c1t22d0
# zpool status goldenpond
   pool: goldenpond
  state: ONLINE
  scrub: none requested
config:

 NAME STATE READ WRITE CKSUM
 goldenpond   ONLINE   0 0 0
   raidz1 ONLINE   0 0 0
 c1t17d0  ONLINE   0 0 0
 c1t18d0  ONLINE   0 0 0
 c1t19d0  ONLINE   0 0 0
   raidz1 ONLINE   0 0 0
 c1t20d0  ONLINE   0 0 0
 c1t21d0  ONLINE   0 0 0
 c1t22d0  ONLINE   0 0 0

errors: No known data errors


Stephen Stogner wrote:
> Hello,
>  Is there a way to replace a standalone drive with a raidz implementation 
> in zfs?
> 
> 
> ie zpool replace bigpool c2t1d0 raidz c2t3d0 c2t4d0 c2t5d0 
> 
> etc..
> thanks
>  
>  


[zfs-discuss] ZFS Failure on 3511s

2007-10-29 Thread Stephen Green
We have a pair of 3511s that are host to a couple of ZFS filesystems.
Over the weekend we had a power hit, and when we brought the server that
the 3511s are attached to back up, the ZFS filesystem was hosed.  Are we 
totally out of luck here?  There's nothing here that we can't recover, 
given enough time, but I'd really rather not have to do this.

The machine is a v40z, the 3511s are attached via FC, and uname -a says:

SunOS search 5.10 Generic_118855-33 i86pc i386 i86pc

zpool list says:

NAME    SIZE   USED   AVAIL   CAP   HEALTH    ALTROOT
files   -      -      -       -     FAULTED   -

zpool status -v says:

   pool: files
  state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-CS
  scrub: none requested
config:

        NAME                               STATE    READ WRITE CKSUM
        files                              FAULTED     0     0     6  corrupted data
          raidz1                           ONLINE      0     0     6
            c0t600C0FF00923490E9DA84700d0  ONLINE      0     0     0
            c0t600C0FF00923494F39349400d0  ONLINE      0     0     0
            c0t600C0FF0092349138D7A3C00d0  ONLINE      0     0     0
            c0t600C0FF00923495AF4B94F00d0  ONLINE      0     0     0
            c0t600C0FF009234972FF459200d0  ONLINE      0     0     0


Steve, desperate to get his filesystem back


[zfs-discuss] zpool question

2007-10-29 Thread Krzys

hello folks, I am running Solaris 10 U3 and I have a small problem that I don't 
know how to fix...

I had a pool of two drives:

bash-3.00# zpool status
   pool: mypool
  state: ONLINE
  scrub: none requested
config:

 NAME  STATE READ WRITE CKSUM
 mypoolONLINE   0 0 0
   emcpower0a  ONLINE   0 0 0
   emcpower1a  ONLINE   0 0 0

errors: No known data errors

I added another drive

so now I have pool of 3 drives

bash-3.00# zpool status
   pool: mypool
  state: ONLINE
  scrub: none requested
config:

 NAME  STATE READ WRITE CKSUM
 mypoolONLINE   0 0 0
   emcpower0a  ONLINE   0 0 0
   emcpower1a  ONLINE   0 0 0
   emcpower2a  ONLINE   0 0 0

errors: No known data errors

Everything is great, but I've made a mistake and I would like to remove 
emcpower2a from my pool, and I cannot do that...

The mistake I made is that I did not format my device correctly, so instead 
of adding a 125 GB device I added a 128 MB one.

here is my partition on that disk:
partition> print
Current partition table (original):
Total disk cylinders available: 63998 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders         Size            Blocks
  0        root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1        swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2      backup    wu       0 - 63997      125.00GB    (63998/0/0) 262135808
  3  unassigned    wm       0                 0        (0/0/0)             0
  4  unassigned    wm       0                 0        (0/0/0)             0
  5  unassigned    wm       0                 0        (0/0/0)             0
  6         usr    wm     128 - 63997      124.75GB    (63870/0/0) 261611520
  7  unassigned    wm       0                 0        (0/0/0)             0

partition>

What I would like to do is remove the emcpower2a device, format it correctly, 
and then add it back as a 125 GB device instead of the 128 MB one. Is it 
possible to do this in Solaris 10 U3? If not, what are my options?

Regards,

Chris



Re: [zfs-discuss] ZFS Failure on 3511s

2007-10-29 Thread Eric Schrock
What does 'fmdump -eV' show?  You might also want to try the following
and run 'zpool status' in the background:

# dtrace -n 'zfs_ereport_post:entry{stack()}'

This will provide additional information if the source of the ereports
isn't obvious.

- Eric

On Mon, Oct 29, 2007 at 03:44:14PM -0400, Stephen Green wrote:
> We have a pair of 3511s that are host to a couple of ZFS filesystems.
> Over the weekend we had a power hit, and when we brought the server that
> the 3511s are attached to back up, the ZFS filesystem was hosed.  Are we 
> totally out of luck here?  There's nothing here that we can't recover, 
> given enough time, but I'd really rather not have to do this.
> 
> The machine is a v40z, the 3511s are attached via FC, and uname -a says:
> 
> SunOS search 5.10 Generic_118855-33 i86pc i386 i86pc
> 
> zpool list says:
> 
> NAME    SIZE   USED   AVAIL   CAP   HEALTH    ALTROOT
> files   -      -      -       -     FAULTED   -
> 
> zpool status -v says:
> 
>pool: files
>   state: FAULTED
> status: The pool metadata is corrupted and the pool cannot be opened.
> action: Destroy and re-create the pool from a backup source.
> see: http://www.sun.com/msg/ZFS-8000-CS
>   scrub: none requested
> config:
> 
>        NAME                               STATE    READ WRITE CKSUM
>        files                              FAULTED     0     0     6  corrupted data
>          raidz1                           ONLINE      0     0     6
>            c0t600C0FF00923490E9DA84700d0  ONLINE      0     0     0
>            c0t600C0FF00923494F39349400d0  ONLINE      0     0     0
>            c0t600C0FF0092349138D7A3C00d0  ONLINE      0     0     0
>            c0t600C0FF00923495AF4B94F00d0  ONLINE      0     0     0
>            c0t600C0FF009234972FF459200d0  ONLINE      0     0     0
> 
> 
> Steve, desperate to get his filesystem back

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


[zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Ed Saipetch
Hello,

I'm experiencing major checksum errors when using a syba silicon image 3114 
based pci sata controller w/ nonraid firmware.  I've tested by copying data via 
sftp and smb.  With everything I've swapped out, I can't fathom this being a 
hardware problem.  There have been quite a few blog posts out there with people 
having a similar config and not having any problems.

Here's what I've done so far:
1. Changed solaris releases from S10 U3 to NV 75a
2. Switched out motherboards and cpus from AMD sempron to a Celeron D
3. Switched out memory to use completely different dimms
4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 3x400GB 
seagates RAIDZ and 1x250GB hitachi with no raid)

Here's output of a scrub and the status (ignore the date and time, I haven't 
reset it on this new motherboard) and please point me in the right direction if 
I'm barking up the wrong tree.

# zpool scrub tank
# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0   293
  c0d1  ONLINE   0 0   293

errors: 140 data errors, use '-v' for a list
 
 


[zfs-discuss] Different Sized Disks Recommendation

2007-10-29 Thread Paul
Hi,

I was first attracted to ZFS (and therefore OpenSolaris) because I thought that 
ZFS allowed the use of different-sized disks in raidz pools without wasted 
disk space. Further research has confirmed that this isn't possible by default.

I have seen a little bit of documentation around using ZFS with slices. I think 
this might be the answer, but I would like to be sure what the compromise is if 
I go down this path.

I am trying to set up a file server for my home office. I would like as much 
storage space as possible, and I need to be able to survive one drive failing.

Here is what I have:

DiskA - 160GB
DiskB - 200GB
DiskC - 250GB
DiskD - 500GB
DiskE - 500GB

(I also have a separate 20GB disk that I will be booting from.)

Here is the best way I can see to maximize storage:
-160GB slice on all disks
--Put these into a raidz1 zpool
-40GB slice on disks B through E
--Put these into a raidz1 zpool
-50GB slice on disks C through E
--Put these into a raidz1 zpool
-250GB slice on disks D and E
--Put these into a raidz1 zpool

Then I combine the four raidz groups into a single striped zpool.

That *should* leave me with 1110GB of storage space (out of the 1610GB of 
hard disk space); a rough command sketch of this layout follows below.
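
A minimal sketch of the pool creation, assuming the slices have already been 
created with format. The device names (c1t0d0 for disk A through c1t4d0 for 
disk E, with s0/s1/s3/s4 as the 160GB/40GB/50GB/250GB slices) are hypothetical, 
and zpool may complain about the mismatched raidz widths, hence the -f:

   # one pool, four raidz1 top-level vdevs built from equal-sized slices
   zpool create -f tank \
       raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0 \
       raidz c1t1d0s1 c1t2d0s1 c1t3d0s1 c1t4d0s1 \
       raidz c1t2d0s3 c1t3d0s3 c1t4d0s3 \
       raidz c1t3d0s4 c1t4d0s4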

If a hard drive fails, one or more of my raidz1 vdevs will become degraded. 
However, I should be able to replace that drive, slice it like the previous 
drive, and then I should be back in business.

How much extra work am I going to spend trying to maintain a structure this 
complicated? If a drive fails and I replace it with a drive of a different size 
(bigger or smaller) what kind of hell will I be putting myself through to 
reorganize things? What if I want to add a new disk of a different size? Most 
importantly, am I putting my data at risk by having (and manipulating) these 
extra 'layers'?

How much more difficult will it be to recover if my system completely crashes 
and I need to reinstall the O/S?

A much simpler layout might be Disks A through C in one raidz1 zpool and Disks 
D and E in another raidz1 zpool (there is a good chance I'll buy another 500GB 
disk in the next month). This leaves me with 820GB. It is a lot less efficient, 
but the setup is much simpler to maintain. (It will be a little more difficult 
to use because I'll have two file systems, not one.)

I have attached a spreadsheet that calculates how big each slice should be if I 
want to be as efficient as possible with the storage.

Any comments?

Thanks!
 
 

zfs-disks.xls
Description: MS-Excel spreadsheet


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Ed Saipetch wrote:
> Hello,
>
> I'm experiencing major checksum errors when using a syba silicon image 3114 
> based pci sata controller w/ nonraid firmware.  I've tested by copying data 
> via sftp and smb.  With everything I've swapped out, I can't fathom this 
> being a hardware problem.  

I can.  But I suppose it could also be, in some unknown way, a driver issue.
Even before ZFS, I've had numerous situations where various si3112 and 3114
chips would corrupt data on UFS and PCFS, with very simple copy-and-checksum
test scripts doing large bulk transfers.

Si chips are best used to clean coffee grinders.  Go buy a real SATA 
controller.

Neal

> There have been quite a few blog posts out there with people having a similar 
> config and not having any problems.
>
> Here's what I've done so far:
> 1. Changed solaris releases from S10 U3 to NV 75a
> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
> 3. Switched out memory to use completely different dimms
> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 
> 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>
> Here's output of a scrub and the status (ignore the date and time, I haven't 
> reset it on this new motherboard) and please point me in the right direction 
> if I'm barking up the wrong tree.
>
> # zpool scrub tank
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
> config:
>
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0   293
>   c0d1  ONLINE   0 0   293
>
> errors: 140 data errors, use '-v' for a list
>  
>  



Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Nathan Kroenert
You have not mentioned if you have swapped the 3114 based HBA itself...?

Have you tried a different HBA? :)

Nathan.

Ed Saipetch wrote:
> Hello,
> 
> I'm experiencing major checksum errors when using a syba silicon image 3114 
> based pci sata controller w/ nonraid firmware.  I've tested by copying data 
> via sftp and smb.  With everything I've swapped out, I can't fathom this 
> being a hardware problem.  There have been quite a few blog posts out there 
> with people having a similar config and not having any problems.
> 
> Here's what I've done so far:
> 1. Changed solaris releases from S10 U3 to NV 75a
> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
> 3. Switched out memory to use completely different dimms
> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 
> 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
> 
> Here's output of a scrub and the status (ignore the date and time, I haven't 
> reset it on this new motherboard) and please point me in the right direction 
> if I'm barking up the wrong tree.
> 
> # zpool scrub tank
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
> config:
> 
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0   293
>   c0d1  ONLINE   0 0   293
> 
> errors: 140 data errors, use '-v' for a list
>  
>  


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread MC
> Here's what I've done so far:

The obvious thing to test is the drive controller, so maybe you should do that 
:)
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Edward Saipetch
Neal Pollack wrote:
> Ed Saipetch wrote:
>> Hello,
>>
>> I'm experiencing major checksum errors when using a syba silicon  
>> image 3114 based pci sata controller w/ nonraid firmware.  I've  
>> tested by copying data via sftp and smb.  With everything I've  
>> swapped out, I can't fathom this being a hardware problem.
>
> I can.  But I suppose it could also be in some unknown way a driver  
> issue.
> Even before ZFS, I've had numerous situations where various si3112  
> and 3114 chips
> would corrupt data on UFS and PCFS, with very simple  copy and  
> checksum
> test scripts, doing large bulk transfers.
>
> Si chips are best used to clean coffee grinders.  Go buy a real SATA  
> controller.
>
> Neal

I have no problem ponying up money for a better SATA controller.  I saw
a bunch of blog posts from people who were successful using the card, so I
thought maybe I had a bad card with corrupt firmware NVRAM.  Is it worth
trying to track down the bug?  If this type of corruption exists, nobody
should be using this card.  As a side note, what SATA cards are people
having luck with?

>
>> There have been quite a few blog posts out there with people having  
>> a similar config and not having any problems.
>>
>> Here's what I've done so far:
>> 1. Changed solaris releases from S10 U3 to NV 75a
>> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
>> 3. Switched out memory to use completely different dimms
>> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in  
>> RAIDZ, 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>>
>> Here's output of a scrub and the status (ignore the date and time,  
>> I haven't reset it on this new motherboard) and please point me in  
>> the right direction if I'm barking up the wrong tree.
>>
>> # zpool scrub tank
>> # zpool status
>>  pool: tank
>> state: ONLINE
>> status: One or more devices has experienced an error resulting in  
>> data
>>corruption.  Applications may be affected.
>> action: Restore the file in question if possible.  Otherwise  
>> restore the
>>entire pool from backup.
>>   see: http://www.sun.com/msg/ZFS-8000-8A
>> scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
>> config:
>>
>>NAMESTATE READ WRITE CKSUM
>>tankONLINE   0 0   293
>>  c0d1  ONLINE   0 0   293
>>
>> errors: 140 data errors, use '-v' for a list





Re: [zfs-discuss] Different Sized Disks Recommendation

2007-10-29 Thread MC
ZFS "copies" attribute could be used to make this easy, but with all the talk 
of kernel panics on drive loss and non-guaranteed block placement across 
different disks, I don't like ZFS copies.  (see threads like 
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-October/043279.html )
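
For reference, enabling copies is a one-line property change; a minimal sketch, 
with a hypothetical dataset name:

   # keep two copies of every block of tank/home, even on a single-vdev pool
   zfs set copies=2 tank/home
   zfs get copies tank/home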

The bottom line in my opinion:  Yes what you are talking about is a big hassle 
to create, and yes it is a big hassle to maintain in the future.  Instead I'd 
buy another cheap ($99) 500GB disk and RAID5/RAIDZ them together for a clean 
and easy 1000GB of space.  The end!
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Will Murnane
On 10/30/07, Edward Saipetch <[EMAIL PROTECTED]> wrote:
> As a side note, what SATA cards are people having luck with?
Running b74, I'm very happy with the Marvell mv88sx6081-based Supermicro card:
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=aoc-sat2
http://www.wiredzone.com/xq/asp/ic.10016527/qx/itemdesc.htm
It hypothetically supports port multipliers, but I haven't tested this myself.

On earlier releases (b69, specifically) I had problems with disks
occasionally disappearing.  Those appear to have been completely
resolved; the box has most recently been up for 16 days with no
errors.

Will


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread James C. McPherson
Will Murnane wrote:
> On 10/30/07, Edward Saipetch <[EMAIL PROTECTED]> wrote:
>> As a side note, what SATA cards are people having luck with?
> Running b74, I'm very happy with the Marvell mv88sx6081-based Supermicro card:
> http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
> http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=aoc-sat2
> http://www.wiredzone.com/xq/asp/ic.10016527/qx/itemdesc.htm
> It hypothetically supports port multipliers, but I haven't tested this myself.
> 
> On earlier releases (b69, specifically) I had problems with disks
> occasionally disappearing.  Those appear to have been completely
> resolved; the box has most recently been up for 16 days with no
> errors.

We don't currently have support for SATA port multipliers in
Solaris or OpenSolaris. I know this because people in my team
are working on it (no ETA as yet) and we discussed it last week.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Edward Saipetch wrote:
> Neal Pollack wrote:
>> Ed Saipetch wrote:
>>> Hello,
>>>
>>> I'm experiencing major checksum errors when using a syba silicon 
>>> image 3114 based pci sata controller w/ nonraid firmware.  I've 
>>> tested by copying data via sftp and smb.  With everything I've 
>>> swapped out, I can't fathom this being a hardware problem.  
>>
>> I can.  But I suppose it could also be in some unknown way a driver 
>> issue.
>> Even before ZFS, I've had numerous situations where various si3112 
>> and 3114 chips
>> would corrupt data on UFS and PCFS, with very simple  copy and checksum
>> test scripts, doing large bulk transfers.
>>
>> Si chips are best used to clean coffee grinders.  Go buy a real SATA 
>> controller.
>>
>> Neal
> I have no problem ponying up money for a better SATA controller.  I 
> saw a bunch of blog posts that people were successful using the card 
> so I thought maybe I had a bad card with corrupt firmware nvram.  Is 
> it worth trying to trace down the bug?

Of course it is.  File a bug so someone on the SATA team can study it.

> If this type of corruption exists, nobody should be using this card.  
> As a side note, what SATA cards are people having luck with?

A lot of people are happy with the 8-port PCI SATA card made by
SuperMicro that has the Marvell chip on it. Don't buy other Marvell cards
on eBay, because Marvell dumped a ton of cards that ended up with an earlier
rev of the silicon that can corrupt data. But all the cards made and sold by
SuperMicro have the C rev or later silicon and work great.

That said, I wish someone would investigate the Silicon Image issues, but
there are only so many engineers, and so little time.
>>
>>> There have been quite a few blog posts out there with people having 
>>> a similar config and not having any problems.
>>>
>>> Here's what I've done so far:
>>> 1. Changed solaris releases from S10 U3 to NV 75a
>>> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
>>> 3. Switched out memory to use completely different dimms
>>> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in 
>>> RAIDZ, 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>>>
>>> Here's output of a scrub and the status (ignore the date and time, I 
>>> haven't reset it on this new motherboard) and please point me in the 
>>> right direction if I'm barking up the wrong tree.
>>>
>>> # zpool scrub tank
>>> # zpool status
>>>   pool: tank
>>>  state: ONLINE
>>> status: One or more devices has experienced an error resulting in data
>>> corruption.  Applications may be affected.
>>> action: Restore the file in question if possible.  Otherwise restore 
>>> the
>>> entire pool from backup.
>>>see: http://www.sun.com/msg/ZFS-8000-8A
>>>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
>>> config:
>>>
>>> NAMESTATE READ WRITE CKSUM
>>> tankONLINE   0 0   293
>>>   c0d1  ONLINE   0 0   293
>>>
>>> errors: 140 data errors, use '-v' for a list
