[zfs-discuss] Zpools on USB

2009-03-13 Thread Stefan Olsson
IMPORTANT: This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.
 

Hello,

I have two USB-drives connected to my PC with a zpool on each, one
called TANK, the other IOMEGA. After some problems this morning I
managed to get the IOMEGA-pool to work, but have had less luck with the
TANK-pool. When I run "zpool import", where I would expect to see some
state for "TANK", I instead get:
"  pool: IOMEGA
id: 9922963935057378355
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported
using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

IOMEGA  FAULTED  corrupted data
  c4t0d0ONLINE"
---
When running a zpool status I get this:
"  pool: IOMEGA
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
IOMEGA  ONLINE   0 0 0
  c8t0d0ONLINE   0 0 0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c6t0d0s0  ONLINE   0 0 0
c6t2d0s0  ONLINE   0 0 0"

In other words, the actual IOMEGA-pool appears on drive c8t0d0 and is
marked as OK, but the USB-drive on c4t0d0 also appears to have a zpool
called IOMEGA, although it really contains the TANK-pool!
What really worries me is that ZFS for some reason has started to treat
a drive which belonged to one pool as if it belonged to another pool.
Could this happen with other non-USB drives in other configuration
scenarios such as mirrors or raidz?

I suppose anything can happen on Friday the 13th... 

Cheers,

   Stefan Olsson
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Reuse drive that contained zpool

2009-03-13 Thread Stefan Olsson
IMPORTANT: This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.
 

Hello,

What should I do with a drive that previously contained a zpool which
is now corrupted if I want to use it again, i.e. create an entirely new
zpool in the confidence that the old zpool will never show its ugly face
again? If I just do a format I reckon that traces of the old zpool are
still there; is there any tool that will permanently destroy a zpool?
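
Something along these lines is what I have in mind (illustration only;
the disk name is just an example, and I have not tested this exact
sequence):

   # if the old pool can still be imported, destroy it cleanly
   zpool import -f <old-pool-name-or-id>
   zpool destroy <old-pool-name>

   # otherwise, overwrite the ZFS labels at the front of the disk; note
   # that ZFS also keeps two labels at the end of the device, so a full
   # wipe would need those overwritten as well
   dd if=/dev/zero of=/dev/rdsk/c4t0d0s0 bs=1024k count=2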


Kind Regards,

  Stefan Olsson
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comstar production-ready?

2009-03-13 Thread Jim Dunham

On Mar 4, 2009, at 7:04 AM, Jacob Ritorto wrote:

Caution:  I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older.  It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iscsi initiator boxes.  Couldn't find resolution through the
various Solaris knowledge bases.  Closest I got was to find out that
there's a problem only in the *Solaris 10* iscsi target code that
incorrectly frobs some counter when it shouldn't, violating the iscsi
target specifications.  The problem is fixed in Nevada/OpenSolaris.

Long story short, I tried OpenSolaris 2008.11 and the iscsi crashes
ceased and things ran smoothly.  Not the solution I was hoping for,
since this was to eventually be a prod box, but then Sun announced
that I could purchase OpenSolaris support, so I was covered.  On OS,
my two big filers have been running really nicely for months and
months now.

Don't try to use Solaris 10 as a filer OS unless you can identify and
resolve the iscsi target issue.


The iSCSI Target Daemon in OpenSolaris 2008.xx has been backported
to Solaris 10 u7.


- Jim


On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson wrote:



Stephen Nelson-Smith wrote:


Hi,

I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre.  I
based this on an X2200 + J4400, Solaris 10 + rsync.

This was enthusiastically received, to the extent that the client is
now requesting that their live system (15TB data on cheap SAN and
Linux LVM) be replaced with a ZFS-based system.

The catch is that they're not ready to move their production systems
off Linux - so web, db and app layer will all still be on RHEL 5.



At some point I am sure you will convince them to see the light! ;)


As I see it, if they want to benefit from ZFS at the storage layer,
the obvious solution would be a NAS system, such as a 7210, or
something built from a JBOD and a head node that does something
similar.  The 7210 is out of budget - and I'm not quite sure how it
presents its storage - is it NFS/CIFS?


The 7000 series devices can present NFS, CIFS and iSCSI. Looks very nice
if you need a nice GUI, don't know the command line, or need the nice
analytics. I had a play with one the other day and am hoping to get my
mitts on one shortly for testing. I would like to give it a real good
crack with VMware for VDI VMs.


 If so, presumably it would be
relatively easy to build something equivalent, but without the
(awesome) interface.



For sure the above gear would be fine for that. If you use standard
Solaris 10 10/08 you have NFS and iSCSI ability directly in the OS, and
also available to be supported via a support contract if needed. Best
bet would probably be NFS for the Linux machines, but you would need to
test in *their* environment with *their* workload.
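
Roughly what I mean, as a sketch only (pool and dataset names invented):

   # share a filesystem over NFS to the Linux boxes
   zfs create tank/archive
   zfs set sharenfs=on tank/archive

   # or carve out a zvol and present it with the old (pre-COMSTAR)
   # iSCSI target that ships with Solaris 10
   zfs create -V 500g tank/lun0
   zfs set shareiscsi=on tank/lun0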


The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI.  I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.



If you want fibre channel target then you will need to use OpenSolaris
or SXDE I believe. It's not available in mainstream Solaris yet. I am
personally waiting until it has been *well* tested in the bleeding-edge
community. I have too much data to take big risks with it.
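
For the iSCSI side of COMSTAR, the rough shape of it is something like
this (a sketch only; the zvol name is invented and I haven't verified
this exact sequence myself):

   zfs create -V 500g tank/lun0
   svcadm enable stmf
   sbdadm create-lu /dev/zvol/rdsk/tank/lun0
   stmfadm add-view <GUID printed by sbdadm>
   itadm create-target
   svcadm enable -r svc:/network/iscsi/target:default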


What other options are there, and what advice/experience can you share?




I do very similar stuff here with J4500's and T2K's for compliance
archives, NFS and iSCSI targets for Windows machines. Works fine for me.
Biggest system is 48TB on J4500 for Veritas Netbackup DDT staging
volumes. Very good throughput indeed. Perfect in fact, based on the
large files that are created in this environment. One of these J4500's
can keep 4 LTO4 drives in a SL500 saturated with data on a T5220.
(4 streams at ~160 MB/sec)

I think you have pretty much the right idea though. Certainly if you use
Sun kit you will be able to deliver a commercially supported solution
for them.





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Reuse drive that contained zpool

2009-03-13 Thread Darren J Moffat

Stefan Olsson wrote:

IMPORTANT: This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.


If you want answers you must not post to a list like this saying that 
the content is private and confidential.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Blake Irvin
This is really great information, though most of the controllers  
mentioned aren't on the OpenSolaris HCL.  Seems like that should be  
corrected :)


My thanks to the community for their support.

On Mar 12, 2009, at 10:42 PM, "James C. McPherson" wrote:



On Thu, 12 Mar 2009 22:24:12 -0400
Miles Nordin  wrote:


"wm" == Will Murnane  writes:



* SR = Software RAID, IT = Integrated Target mode. IR mode
is not supported.

   wm> Integrated target mode lets you export some storage attached
   wm> to the host system (through another adapter, presumably) as a
   wm> storage device.  IR mode is almost certainly Internal RAID,
   wm> which that card doesn't have support for.

no, the supermicro page for AOC-USAS-L8i does claim support for all
three, and supermicro has an ``IR driver'' available for download for
Linux and Windows, or at least a link to one.

I'm trying to figure out what's involved in determining and switching
modes, why you'd want to switch them, what cards support which modes,
which solaris drivers support which modes, u.s.w.

The answer may be very simple, like ``the driver supports only IR.
Most cards support IR, and cards that don't support IR won't work.  IR
can run in single-LUN mode.  Some IR cards support RAID5, others
support only RAID 0, 1, 10.''  Or it could be ``the driver supports
only SR.  The driver is what determines the mode, and it does this by
loading firmware into the card, and the first step in initializing the
card is always for the driver to load in a firmware blob.  All
currently-produced cards support SR.''  so...actually, now that I say
it, I guess the answer cannot be very simple.  It's going to have to
be a little complicated.
Anyway, I can guess, too.  I was hoping someone would know for sure
off-hand.
Anyway, I can guess, too.  I was hoping someone would know for sure
off-hand.



Hi Miles,
the mpt(7D) driver supports that card. mpt(7D) supports both
IT and IR firmware variants. You can find out the specifics
for what RAID volume levels are supported by reading the
raidctl(1M) manpage. I don't think you can switch between IT
and IR firmware, but not having needed to know this before,
I haven't tried it.
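
If you want to see what the card and driver report on a running system,
something like this should do it (a sketch only; controller numbers and
output will vary):

   # list RAID volumes/controllers known to the system
   raidctl
   raidctl -l 1

   # confirm which driver actually attached to the HBA
   prtconf -D | grep -i mpt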


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpools on USB

2009-03-13 Thread Richard Elling

comment below...

Stefan Olsson wrote:

IMPORTANT: This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.
  


please notify your lawyers that this message is now on the internet and
publicly archived forever :-)

 


Hello,

I have two USB-drives connected to my PC with a zpool on each, one
called TANK, the other IOMEGA. After some problems this morning I
managed to get the IOMEGA-pool to work, but have had less luck with the
TANK-pool. When I run "zpool import", where I would expect to see some
state for "TANK", I instead get:
"  pool: IOMEGA

id: 9922963935057378355
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported
using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

IOMEGA  FAULTED  corrupted data
  c4t0d0ONLINE"
---
When running a zpool status I get this:
"  pool: IOMEGA
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
IOMEGA  ONLINE   0 0 0
  c8t0d0ONLINE   0 0 0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c6t0d0s0  ONLINE   0 0 0
c6t2d0s0  ONLINE   0 0 0"

In other words, the actual IOMEGA-pool appears on drive c8t0d0 and is
marked as OK, but the USB-drive on c4t0d0 also appears to have a zpool
called IOMEGA, although it really contains the TANK-pool!
  


ZFS maintains a cache of which pools were imported so that, at boot time,
it will automatically try to re-import them.  The file is
/etc/zfs/zpool.cache, and you can view its contents by using "zdb -C".

If the current state of affairs does not match the cache, then you can
export the pool, which will clear its entry in the cache.  Then retry the
import.
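
As a sketch of that sequence (pool name taken from your output; adjust
as needed):

   # show what the cache file currently records
   zdb -C

   # export the pool whose cache entry no longer matches reality,
   # then rescan the devices and re-import
   zpool export IOMEGA
   zpool import
   zpool import IOMEGA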
-- richard


What really worries me is that ZFS for some reason has started to treat
a drive which belonged to one pool as if it belonged to another pool.
Could this happen with other non-USB drives in other configuration
scenarios such as mirrors or raidz?

I suppose anything can happen on Friday the 13th... 


Cheers,

   Stefan Olsson
  
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Miles Nordin
> "jcm" == James C McPherson  writes:

   jcm> the mpt(7D) driver supports that card.

Then I am apparently stuck with a closed-source driver again, and
again by surprise.  I bought it because I thought you said 1068E was
supported by mega_sas:

>> http://www.osnews.com/thread?317113

   jcm> The driver for LSI's MegaRAID SAS card is "mega_sas" which was
   jcm> integrated into snv_88. It's planned for backporting to a
   jcm> Solaris 10 update.

but apparently it was wishful thinking on my part:

   jcm> There are several LSI cards which use the 1068 and 1068E chip.
   jcm> Some of these use mpt(7d), some use mega_sas(7d). It all
   jcm> depends on the firmware of the card, basically. You could also
   jcm> have a look at the PCI IDs database at
   jcm> http://pciids.sourceforge.net to see what the name to pci
   jcm> vid/did mapping is. That provides a fairly good indicator of
   jcm> whether you'll need mpt(7d) or mega_sas(7d).

I downloaded the pci.ids file from sourceforge, but I do not
understand how to tell which cards have the proprietary closed-source
driver, even if I somehow know the PCI ID of the card before I buy it?

(pci.ids lists only the chip number in the description, and (a) LSI does
not disclose the chip-to-card model association, and (b) apparently the
same chip can have different drivers, so the chip number isn't enough).

Is there some file in OpenSolaris against which I can cross-reference
this?  Or, really, one I could just use instead of pci.ids, since only
the PCI ID, not the description, is enough to uniquely identify the card.

Is it possible to change the firmware and thus the driver binding
without changing the PCI ID so that I have to worry about
manufacturers doing that, or can I really count on the PCI ID alone to
tell me which driver will run the card?  I'm worried that for example
adding an iButton to unlock RAID5 on a supermicro card will change the
driver attachment.

Also does anyone know which cards work on SPARC?  None?  All?  I know
the SATA framework is x86 only, but AIUI none of the LSI cards
actually use the SATA framework (so, presumably things like
hd/smartmontools will not work, but at least the card has got a better
shot of working on SPARC).  I cannot test the AOC-USAS-L8i card I have
in SPARC because it is PCIe, and I have only a PCI-X SPARC.

Does anyone have an affordable card which is working well with the
open-source mega_sas driver?  It seems we are still without any
open-source SATA card that works well.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Miles Nordin
> "c" == Miles Nordin  writes:

 c> Is there some file in OpenSolaris against which I can
 c> cross-reference this?  or...really, just use instead of
 c> pci.ids, since only the PCI ID not the description is enough

I found these:

 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/os/driver_aliases
 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/sparc/os/driver_aliases

but they don't seem to be complete.  mega_sas is not listed in either.
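
On a live system the same mapping should also be visible locally, e.g.
(a sketch; 1000 is LSI's PCI vendor ID):

   # which PCI aliases are bound to each candidate driver on this box
   egrep 'mpt|mega_sas' /etc/driver_aliases

   # and the vendor/device IDs the installed card actually reports
   prtconf -pv | grep -i pci1000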


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Tim
On Fri, Mar 13, 2009 at 3:33 PM, Miles Nordin  wrote:

> > "c" == Miles Nordin  writes:
>
> c> Is there some file in OpenSolaris against which I can
> c> cross-reference this?  or...really, just use instead of
> c> pci.ids, since only the PCI ID not the description is enough
>
> I found these:
>
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/os/driver_aliases
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/sparc/os/driver_aliases
>
> but they don't seem to be complete.  mega_sas is not listed in either.
>

You can manually create an entry in /etc/driver_aliases to force a driver to
bind to a specific card, regardless of firmware.  Whether it will work or
not isn't apparent.  What is apparent is that it won't be supported by
anyone though :)
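
Something like this, for example (sketch only; the device ID shown,
pci1000,60, is just an example of a MegaRAID SAS ID, substitute whatever
your card actually reports):

   # add an alias so mega_sas claims PCI vendor 0x1000, device 0x0060
   update_drv -a -i '"pci1000,60"' mega_sas

   # then attach without a reboot
   devfsadm -i mega_sas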

ANYWAYS, a bit of research goes a long ways :)

You'll find the various pci device ID's towards the bottom.
http://src.opensolaris.org/source/xref/zfs-crypto/phase2/usr/src/pkgdefs/SUNWmegasas/postinstall?&r=6542

Looks like there's a TON of supported cards.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Tim
On Fri, Mar 13, 2009 at 3:33 PM, Miles Nordin  wrote:

> > "c" == Miles Nordin  writes:
>
> c> Is there some file in OpenSolaris against which I can
> c> cross-reference this?  or...really, just use instead of
> c> pci.ids, since only the PCI ID not the description is enough
>
> I found these:
>
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/os/driver_aliases
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/sparc/os/driver_aliases
>
> but they don't seem to be complete.  mega_sas is not listed in either.
>
>
Oh, and for people too lazy to google, here's a start:
http://pci-ids.ucw.cz/read/PC/1000/0060
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Miles Nordin
> "t" == Tim   writes:

 t> 
http://src.opensolaris.org/source/xref/zfs-crypto/phase2/usr/src/pkgdefs/SUNWmegasas/postinstall?&r=6542

thanks.

 t> Looks like there's a TON of supported cards.

They are all 1078 cards though.  James mentioned mega_sas supports
some 1068E cards depending on the firmware?

The 1078 seem to have rather large built-in write caches and thus be
expensive and unnecessary for people who want to do raidz/slog.  For
example a lot of the cards are PERC with 4 ports for $500---costs
more than the drives you plug it into.  Are there any cheap cards for
mega_sas?

any word on which cards work on SPARC?

 t> http://pci-ids.ucw.cz/read/PC/1000/0060

or else the whole list at pciids.sourceforge.net in the mail i quoted


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread James C. McPherson
On Fri, 13 Mar 2009 18:00:04 -0400
Miles Nordin  wrote:

> > "t" == Tim   writes:
> 
>  t> 
> http://src.opensolaris.org/source/xref/zfs-crypto/phase2/usr/src/pkgdefs/SUNWmegasas/postinstall?&r=6542
> 
> thanks.
> 
>  t> Looks like there's a TON of supported cards.
> 
> They are all 1078 cards though.  James mentioned mega_sas supports
> some 1068E cards depending on the firmware?

Yes, that's correct. Not very convenient, I'm afraid.
 
> The 1078 seem to have rather large built-in write caches and thus be
> expensive and unnecessary for people who want to do raidz/slog.  For
> example a lot of the cards are PERC with 4 ports for $500---costs
> more than the drives you plug it into.  Are there any cheap cards for
> mega_sas?

I don't know, sorry. I've only ever seen Sun and Dell-branded
MegaRAID SAS cards, and I have no idea about pricing.

> any word on which cards work on SPARC?
> 
>  t> http://pci-ids.ucw.cz/read/PC/1000/0060
> 
> or else the whole list at pciids.sourceforge.net in the mail i quoted

We haven't got mega_sas on SPARC at this point either.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Miles Nordin
> "jcm" == James C McPherson  writes:

   jcm> We haven't got mega_sas on SPARC at this point either.

The card Blake found:

 http://www.provantage.com/lsi-logic-lsi00117~7LSIG03X.htm
 
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3080xr/index.html

any idea if that's got a shot of working on SPARC (with mpt i presume,
or is there a fourth lsi driver)?  or is it impossible to tell the
chip/firmware of 3080X?  or is it not going to happen period without
Forth firmware?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss