Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Bruno Sousa
Hmm...that easy? ;)

Thanks for the tip, I will see if that works out.

Bruno

On 29-6-2010 2:29, Mike Devlin wrote:
> I haven't tried it yet, but supposedly this will back up/restore the
> COMSTAR config:
>
> $ svccfg export -a stmf > comstar.bak.${DATE}
>
> If you ever need to restore the configuration, you can attach the
> storage and run an import:
>
> $ svccfg import comstar.bak.${DATE}
>
>
> - Mike
>
> On 6/28/10, bso...@epinfante.com  wrote:
>   
>> Hi all,
>>
>> With osol b134 exporting a couple of iSCSI targets to some hosts, how can
>> the COMSTAR configuration be migrated to another host?
>> I can use ZFS send/receive to replicate the LUNs, but how can I
>> "replicate" the targets and views from serverA to serverB?
>>
>> Are there any best practices to follow to accomplish this?
>> Thanks for all your time,
>>
>> Bruno
>>
>> Sent from my HTC
>>
>>
>> 
>   



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-29 Thread Arne Jansen
Edward Ned Harvey wrote:
> Due to recent experiences, and discussion on this list, my colleague and
> I performed some tests:
> 
> Using Solaris 10, fully upgraded.  (zpool version 15 is the latest, which does
> not have the log device removal that was introduced in zpool version 19.)  If
> you lose an unmirrored log device in any way possible, the OS will crash, and
> the whole zpool is permanently gone, even after reboots.
> 

I'm a bit confused. I tried hard, but haven't been able to reproduce this
using Sol10U8. I have a mirrored slog device. While putting it
under load doing synchronous file creations, we pulled the power cords
and unplugged the slog devices. After powering on, zfs imported the pool,
but prompted us to acknowledge the missing slog devices with zpool clear.
After that the pool was accessible again. That's exactly how it should be.

What am I doing wrong here? The system is on a different pool using different
disks.

One peculiarity I noted, though: when pulling both slog devices from the running
machine, zpool status reports one file error. In my understanding this should
not happen, as the file data is written from memory and not from the contents
of the ZIL. It seems the reported write error from the slog device somehow
led to a corrupted file.

Thanks,
Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Preston Connors
On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote:
> Hmm...that easy? ;)
> 
> Thanks for the tip, i will see if that works out.
> 
> Bruno

Be aware of the Important Note in
http://wikis.sun.com/display/OpenSolarisInfo/Backing+Up+and+Restoring+a+COMSTAR+Configuration
regarding Backing Up and Restoring a COMSTAR Configuration.

Important Note
An existing bug in svccfg export causes data to be lost on export when
the persistent logical unit data (stored in the provider_data_pg_sbd
property group) exceeds 2 Kbytes. Refer to CR 6694511 for more details.
Because the truncation is silent, you cannot determine when data has
been lost on export. Until CR 6694511 is integrated, do not use the
instructions on this page to back up the STMF service.

> On 29-6-2010 2:29, Mike Devlin wrote:
> > I havnt tried it yet, but supposedly this will backup/restore the
> > comstar config:
> >
> > $ svccfg export -a stmf > comstar.bak.${DATE}
> >
> > If you ever need to restore the configuration, you can attach the
> > storage and run an import:
> >
> > $ svccfg import comstar.bak.${DATE}
> >
> >
> > - Mike
> >
> > On 6/28/10, bso...@epinfante.com  wrote:
> >   
> >> Hi all,
> >>
> >> Having osol b134 exporting a couple of iscsi targets to some hosts,how can
> >> the COMSTAR configuration be migrated to other host?
> >> I can use the ZFS send/receive to replicate the luns but how can I
> >> "replicate" the target,views from serverA to serverB ?
> >>
> >> Is there any best procedures to follow to accomplish this?
> >> Thanks for all your time,
> >>
> >> Bruno
> >>
> >> Sent from my HTC
> >>
> >>
> >> 
> >   
> 
> 


-- 
Thank you,
Preston Connors
Atlantic.Net

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott
> 
> would be nice if i could pipe the zfs send stream to
> a split and then
> send of those splitted stream over the
> network to a remote system. it would help sending it
> over to remote
> system quicker. can your tool do that?
> 
> something like this
> 
> [ASCII diagram, mangled in the archive: zfs send (local) -> split into
>  multiple streams/fifos -> join -> zfs recv (remote); copy from the
>  fifos to tape(s).]
> 

> Asif Iqbal

I did look at doing this, with the intention of allowing simultaneous streams 
to multiple tape drives, but put the idea to one side.   

I thought of providing interleaved streams, but wasn't happy with the idea that 
the whole process would block when one of the pipes stalled.

I also contemplated dividing the stream into several large chunks, but for them 
to run simultaneously that seemed to require several reads of the original dump 
stream.  Besides the expense of this approach,  I am not certain that repeated 
zfs send streams have exactly the same byte content.

I think that probably the best approach would be the interleaved streams.

That said, I am not sure how this would necessarily help with the situation you 
describe.  Isn't the limiting factor going to be the network bandwidth between 
remote machines?  Won't you end up with four streams running at quarter speed?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Bruno Sousa
Ouch...!
Thanks for the heads-up. Is there any workaround for this?

Can I, for instance:

   1. Create the iSCSI volume on the primary server with: zfs create -p -s -V 10G vol0/iscsi/LUN_10GB
   2. Use sbdadm to make the LUN available with: sbdadm create-lu /dev/zvol/rdsk/vol0/iscsi/LUN_10GB
   3. Add a view to this LUN with: stmfadm add-view uuid_of_LUN
   4. Use zfs send/receive to replicate this LUN from the primary server to the backup server
   5. On the backup server, use sbdadm create-lu or import-lu /dev/zvol/rdsk/vol0/iscsi/LUN_10GB

Thanks for all the tips.
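A rough sketch of those steps as shell commands, for reference. The create/sbdadm/stmfadm
lines come straight from the list above; the snapshot/ssh replication leg and the host
name are only an assumption about how the transfer might be done:

  # primary server
  zfs create -p -s -V 10G vol0/iscsi/LUN_10GB
  sbdadm create-lu /dev/zvol/rdsk/vol0/iscsi/LUN_10GB
  stmfadm add-view <uuid_of_LUN>

  # replicate the zvol to the backup server (example transport)
  zfs snapshot vol0/iscsi/LUN_10GB@repl
  zfs send vol0/iscsi/LUN_10GB@repl | ssh backupserver zfs receive vol0/iscsi/LUN_10GB

  # backup server: register the received zvol with STMF
  sbdadm import-lu /dev/zvol/rdsk/vol0/iscsi/LUN_10GB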

Bruno

On 29-6-2010 14:10, Preston Connors wrote:
> On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote:
>   
>> Hmm...that easy? ;)
>>
>> Thanks for the tip, i will see if that works out.
>>
>> Bruno
>> 
> Be aware of the Important Note in
> http://wikis.sun.com/display/OpenSolarisInfo/Backing+Up+and+Restoring+a
> +COMSTAR+Configuration regarding Backing Up and Restoring a COMSTAR
> Configuration.
>
> Important Note
> An existing bug in svccfg export causes data to be lost on export when
> the persistent logical unit data (stored in the provider_data_pg_sbd
> property group) exceeds 2 Kbytes. Refer to CR 6694511 for more details.
> Because the truncation is silent, you cannot determine when data has
> been lost on export. Until CR 6694511 is integrated, do not use the
> instructions on this page to back up the STMF service.
>
>   
>> On 29-6-2010 2:29, Mike Devlin wrote:
>> 
>>> I havnt tried it yet, but supposedly this will backup/restore the
>>> comstar config:
>>>
>>> $ svccfg export -a stmf > comstar.bak.${DATE}
>>>
>>> If you ever need to restore the configuration, you can attach the
>>> storage and run an import:
>>>
>>> $ svccfg import comstar.bak.${DATE}
>>>
>>>
>>> - Mike
>>>
>>> On 6/28/10, bso...@epinfante.com  wrote:
>>>   
>>>   
 Hi all,

 Having osol b134 exporting a couple of iscsi targets to some hosts,how can
 the COMSTAR configuration be migrated to other host?
 I can use the ZFS send/receive to replicate the luns but how can I
 "replicate" the target,views from serverA to serverB ?

 Is there any best procedures to follow to accomplish this?
 Thanks for all your time,

 Bruno

 Sent from my HTC


 
 
>>>   
>>>   
>>
>> 
>
>   



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-29 Thread George
Another related question - 

I have a second enclosure with blank disks which I would like to use to take a
copy of the existing zpool as a precaution before attempting any fixes. The
disks in this enclosure are larger than those in the enclosure with the problem.

What would be the best way to do this?

If I were to clone the disks 1:1, would the difference in size cause any
problems? I also had the idea that I might be able to dd the original disks into
files on a ZFS filesystem on the second enclosure and mount the files, but the few
results I've turned up on the subject seem to say this is a bad idea.
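If you do go the image-file route, this is roughly the shape of it (a hedged sketch
only, with made-up pool/path/device names, and it is exactly the sort of setup the
posts you found warn about):

  zfs create backuppool/images                                    # filesystem on the second enclosure
  dd if=/dev/rdsk/c1t0d0p0 of=/backuppool/images/c1t0d0 bs=1024k  # repeat for each original disk
  zpool import -d /backuppool/images -R /mnt storage2            # import the copy from the image files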
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Asif Iqbal
On Tue, Jun 29, 2010 at 8:17 AM, Tristram Scott
 wrote:
>>
>> would be nice if i could pipe the zfs send stream to
>> a split and then
>> send of those splitted stream over the
>> network to a remote system. it would help sending it
>> over to remote
>> system quicker. can your tool do that?
>>
>> something like this
>>
>> [ASCII diagram, mangled in the archive: zfs send (local) -> split into
>>  multiple streams/fifos -> join -> zfs recv (remote); copy from the
>>  fifos to tape(s).]
>>
>
>> Asif Iqbal
>
> I did look at doing this, with the intention of allowing simultaneous streams 
> to multiple tape drives, but put the idea to one side.
>
> I thought of providing interleaved streams, but wasn't happy with the idea 
> that the whole process would block when one of the pipes stalled.
>
> I also contemplated dividing the stream into several large chunks, but for 
> them to run simultaneously that seemed to require several reads of the 
> original dump stream.  Besides the expense of this approach,  I am not 
> certain that repeated zfs send streams have exactly the same byte content.
>
> I think that probably the best approach would be the interleaved streams.
>
> That said, I am not sure how this would necessarily help with the situation 
> you describe.  Isn't the limiting factor going to be the network bandwidth 
> between remote machines?  Won't you end up with four streams running at 
> quarter speed?

If, for example, the network pipe is bigger than one unsplit stream
of zfs send | zfs recv, then splitting it into multiple streams should
make better use of the network bandwidth, shouldn't it?


> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-29 Thread Josh Simon

You may wish to look at this thread:
http://opensolaris.org/jive/thread.jspa?threadID=128046&tstart=0 (last 
post):


---- start quote from thread ----

Hi everybody,

after looking into the current ON source code (b134), in
/usr/src/lib/libstmf/common/store.c, I don't think this bug
still has an impact on COMSTAR.

The code there chunks the "provider_data_prop" (that's the stuff that
can get longer than 4kb) into several separate properties
"provider_data_prop-" of 4kb each when it stores it, and
reads it back with the same algorithm.

If this were still an issue (or ever was), you would not be able
to reboot your machine without losing your data, since libstmf
will read the data back from libscf after a reboot.

"svccfg export -a stmf" will only dump what libstmf has put into
it before.

So as far as I can tell, this warning in connection with COMSTAR is simply
wrong and no longer true.

---- end quote from thread ----

Josh Simon

On 06/29/2010 08:10 AM, Preston Connors wrote:

On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote:

Hmm...that easy? ;)

Thanks for the tip, i will see if that works out.

Bruno


Be aware of the Important Note in
http://wikis.sun.com/display/OpenSolarisInfo/Backing+Up+and+Restoring+a
+COMSTAR+Configuration regarding Backing Up and Restoring a COMSTAR
Configuration.

Important Note
An existing bug in svccfg export causes data to be lost on export when
the persistent logical unit data (stored in the provider_data_pg_sbd
property group) exceeds 2 Kbytes. Refer to CR 6694511 for more details.
Because the truncation is silent, you cannot determine when data has
been lost on export. Until CR 6694511 is integrated, do not use the
instructions on this page to back up the STMF service.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Kyle McDonald

On 6/28/2010 10:30 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tristram Scott
>>
>> If you would like to try it out, download the package from:
>> http://www.quantmodels.co.uk/zfsdump/
> 
> I haven't tried this yet, but thank you very much!
> 
> Other people have pointed out bacula is able to handle multiple tapes, and
> individual file restores.  However, the disadvantage of
> bacula/tar/cpio/rsync etc is that they all have to walk the entire
> filesystem searching for things that have changed.

A compromise here might be to feed those tools the output of the new
ZFS diff command (which 'diffs' two snapshots) when it arrives.

That might get something close to "the best of both worlds".

 -Kyle
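A hedged sketch of what that could look like once 'zfs diff' is available. The
snapshot names are examples, and the output format (change type, tab, pathname)
is an assumption about the eventual command:

  # archive only the paths that zfs diff reports as added/changed/renamed
  zfs diff tank/fs@yesterday tank/fs@today \
    | awk -F'\t' '$1 != "-" { print $2 }' \
    | cpio -o > /backup/incr.cpio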

> 
> The advantage of "zfs send" (assuming incremental backups) is that it
> already knows what's changed, and it can generate a continuous datastream
> almost instantly.  Something like 1-2 orders of magnitude faster per
> incremental backup.
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott
> 
> if, for example, the network pipe is bigger then one
> unsplitted stream
> of zfs send | zfs recv then splitting it to multiple
> streams should
> optimize the network bandwidth, shouldn't it ?
> 

Well, I guess so.  But I wonder what the bottleneck is here.  If it is the
rate at which zfs send can stream data, there is a good chance that it is limited
by disk reads.  If we split it into four pipes, I still think you are going to
see four quarter-rate reads.
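One hedged way to see where the bottleneck actually sits, assuming the pv utility
is available and using example dataset/host names:

  zfs send tank/fs@snap | pv > /dev/null                         # raw rate zfs send can sustain
  zfs send tank/fs@snap | pv | ssh remotehost 'cat > /dev/null'  # rate with the network in the path
  # if the first number is already the small one, splitting the stream won't buy much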
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilvering onto a spare - degraded because of read and cksum errors

2010-06-29 Thread Cindy Swearingen

Okay, at this point, I would suspect the spare is having a problem
too or some other hardware problem.

Do you have another disk that you can try as a replacement for c6t1d0?
If so, you can try to detach the spare like this:

# zpool detach tank c7t1d0

Then, physically replace c6t1d0. You can review the full instructions here:

http://docs.sun.com/app/docs/doc/817-2271/gazgd?l=en&a=view

Another option is if you have another unused disk already connected
to the system you can use it to replace c6t1d0 after detaching the
spare. For example, if c4t1d0 was available:

# zpool replace tank c6t1d0 c4t1d0
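Putting those two steps together, a rough sketch of the whole sequence (c4t1d0 is
only an example device name, as above):

  zpool detach tank c7t1d0            # release the in-use hot spare
  zpool replace tank c6t1d0 c4t1d0    # replace the failed disk with the unused one
  zpool status tank                   # watch the resilver onto the new disk complete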

Thanks,

Cindy


On 06/28/10 21:51, Donald Murray, P.Eng. wrote:

Thanks Cindy. I'm running 111b at the moment. I ran a scrub last
night, and it still reports the same status.

r...@weyl:~# uname -a
SunOS weyl 5.11 snv_111b i86pc i386 i86pc Solaris
r...@weyl:~# zpool status -x
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub completed after 2h40m with 0 errors on Mon Jun 28 01:23:12 2010
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          mirror                   DEGRADED     0     0     0
            spare                  DEGRADED 1.37M     0     0
              9828443264686839751  UNAVAIL      0     0     0  was /dev/dsk/c6t1d0s0
              c7t1d0               DEGRADED     0     0 1.37M  too many errors
            c9t0d0                 ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c7t0d0                 ONLINE       0     0     0
            c5t1d0                 ONLINE       0     0     0
        spares
          c7t1d0                   INUSE     currently in use

errors: No known data errors
r...@weyl:~#



On Mon, Jun 28, 2010 at 14:55, Cindy Swearingen
 wrote:

Hi Donald,

I think this is just a reporting error in the zpool status output,
depending on what Solaris release is.

Thanks,

Cindy

On 06/27/10 15:13, Donald Murray, P.Eng. wrote:

Hi,

I awoke this morning to a panic'd opensolaris zfs box. I rebooted it
and confirmed it would panic each time it tried to import the 'tank'
pool. Once I disconnected half of one of the mirrored disks, the box
booted cleanly and the pool imported without a panic.

Because this box has a hot spare, it began resilvering automatically.
This is the first time I've resilvered to a hot spare, so I'm not sure
whether the output below [1]  is normal.

In particular, I think it's odd that the spare has an equal number of
read and cksum errors. Is this normal? Is my spare a piece of junk,
just like the disk it replaced?


[1]
r...@weyl:~# zpool status tank
 pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
exist for
   the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver in progress for 3h42m, 97.34% done, 0h6m to go
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          mirror                   DEGRADED     0     0     0
            spare                  DEGRADED 1.36M     0     0
              9828443264686839751  UNAVAIL      0     0     0  was /dev/dsk/c6t1d0s0
              c7t1d0               DEGRADED     0     0 1.36M  too many errors
            c9t0d0                 ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c7t0d0                 ONLINE       0     0     0
            c5t1d0                 ONLINE       0     0     0
        spares
          c7t1d0                   INUSE     currently in use

errors: No known data errors
r...@weyl:~#
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import issue after a crash - Followup

2010-06-29 Thread Tom Buskey
> I'm not sure I didn't have dedup enabled.  I might
> have.
> As it happens, the system rebooted and is now in
> single user mode.
> I'm trying another import.  Most services are not
> running which should free ram.
> 
> If it crashes again, I'll try the live CD while I see
> about more RAM.

Success.

I got another machine with 8GB of RAM.  I installed the drives and booted from 
the b134 Live CD.

Then I did a zpool import -f.

2-3 days later, it finished and I was able to transfer my data off the drives.  
Yay!

I did not have dedup on.
At one point, top showed that less then 1GB RAM was free.
At another point, I could no longer SSH into the system so it probably used up 
most of the RAM.  The console was also unresponsive at this point.  At least it 
didn't crash and was able to finish.

One other data point: these are WD 20EARS drives with nothing done for the 4k
sectors, which made them slower.

The long recovery and the RAM it needed make me wary of putting too large a zpool
on a home system with too little RAM.  And of the WD 20EARS drives.

These drives are for my TiVo server storage on a Linux box.  I don't care about
losing a few bits, so they're going to be local to the Linux box, set up for
the 4k sectors.

Has Sun done any testing of zpool size versus RAM?
I'd guess that they aren't that interested in bitty boxes with only 4GB of RAM.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Permanent errors detected in :<0x13>

2010-06-29 Thread Brian Leonard
Hi,

I have a zpool which is currently reporting that the ":<0x13>" file 
is corrupt:

bleon...@opensolaris:~$ pfexec zpool status -xv external
  pool: external
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          c0t0d0p0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

:<0x13>

Otherwise, as you can see, the pool is online. As it's unclear to me how to 
restore the ":<0x13>" file, is my only option for correcting this 
error to destroy and recreate the pool?

Thanks,
Brian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Permanent errors detected in :<0x13>

2010-06-29 Thread Cindy Swearingen

Hi Brian,

You might try running a scrub on this pool.

Is this an external USB device?

Thanks,

Cindy

On 06/29/10 09:16, Brian Leonard wrote:

Hi,

I have a zpool which is currently reporting that the ":<0x13>" file 
is corrupt:

bleon...@opensolaris:~$ pfexec zpool status -xv external
  pool: external
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          c0t0d0p0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

:<0x13>

Otherwise, as you can see, the pool is online. As it's unclear to me how to restore the 
":<0x13>" file, is my only option for correcting this error to 
destroy and recreate the pool?

Thanks,
Brian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-29 Thread Andrew Jones
Victor,

The 'zpool import -f -F tank' failed at some point last night. The box was 
completely hung this morning; no core dump, no ability to SSH into the box to 
diagnose the problem. I had no choice but to reset, as I had no diagnostic 
ability. I don't know if there would be anything in the logs?

Earlier I ran 'zdb -e -bcsvL tank' in write mode for 36 hours and gave up to 
try something different. Now the zpool import has hung the box.

Should I try zdb again? Any suggestions?

Thanks,
Andrew
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance

2010-06-29 Thread Effrem Norwood
Hi All,

I created a zvol with the following options on an X4500 with 16GB of ram:

zfs create -s -b 64K -V 250T tank/bkp

I then enabled dedup and compression and exported it to Windows Server 2008 as 
iSCSI via COMSTAR. There it was formatted with a 64K cluster size which is the 
NTFS default for volumes of this size. IO performance in Windows is so slow 
that my backup jobs are timing out. Previously when I created a much smaller 
volume using a "-b" of 4K and NTFS cluster size of 4K performance was 
excellent. What I would like to know is what to look at to figure out where the
performance is going, e.g. whether it's the block size, COMSTAR, etc. CPU
consumption according to top is very low, so compression seems unlikely. The
L2ARC is 11G and there is 4G+ of free memory and no swapping, so dedup seems
unlikely as well. I have looked at the latencytop utility for clues but am not
familiar enough with the code to conclude anything useful.
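A few hedged starting points for narrowing that down; these are general-purpose
observability commands, not a definitive recipe for this setup:

  zpool iostat -v tank 5      # per-vdev throughput/IOPS while a backup job runs
  iostat -xnz 5               # per-disk service times; look for one consistently slow device
  zdb -DD tank                # dedup table (DDT) statistics; a DDT that outgrows RAM/L2ARC hurts
  echo '::arc' | mdb -k       # ARC and L2ARC sizing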

Thanks!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Announce: zfsdump

2010-06-29 Thread Tristram Scott

evik wrote:


Reading this list for a while has made it clear that zfs send is not a
backup solution. It can be used for cloning the filesystem to a backup
array if you are consuming the stream with zfs receive, so that you get
notified immediately about errors. Even one bit flip will render the
stream unusable and you will lose all the data, not just part of your
backup, because zfs receive will restore the whole filesystem or nothing
at all, depending on the correctness of the stream.

You can use par2 or something similar to try to protect the stream
against bit flips, but that would require a lot of free storage space
to recover from errors.

e
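For what it's worth, a hedged sketch of the par2 idea, assuming the par2cmdline
tool is installed; the dataset and file names are made up:

  zfs send tank/fs@snap > /backup/fs.zsend
  par2 create -r10 /backup/fs.zsend       # ~10% recovery data alongside the stream
  # later, if the stored stream is damaged:
  par2 repair /backup/fs.zsend.par2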


The all or nothing aspect does make me nervous, but there are things 
which can be done about it.  The first step, I think, is to calculate a 
checksum of the data stream(s).


 -k chkfile.
  Calculates MD5 checksums for  each  tape  and  for  the
  stream  as a whole. These are written to chkfile, or if
  specified as -, then to stdout.

Run the dump stream back through digest -a md5 and verify that it is intact.
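As a hedged illustration of that verification step (the tape device and file names
are examples):

  digest -a md5 < /dev/rmt/0n > /tmp/tape.md5
  # compare the printed hash with the per-tape value recorded in chkfile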

Certainly, using an error correcting code could help us out, but at 
additional expense, both computational and storage.


Personally, for disaster recovery purposes, I think that verifying the 
data after writing to tape is good enough.  What I am looking to guard 
against is the unlikely event that I have a hardware (or software) 
failure, or serious human error.  This is okay with the zfs send stream, 
unless, of course, we get a data corruption on the tape.  I think the 
correlation between hardware failure today and tape corruption since 
yesterday / last week when I last backed up must be pretty small.


In the event that I reach for the tape and find it corrupted, I go back 
a week to the previous full dump stream.


Clearly the strength of the backup solution needs to match the value of 
the data, and especially the cost of not having the data.  For our large 
database applications we mirror to a remote location, and use tape 
backup.  But still, I find the ability to restore the zfs filesystem 
with all its snapshots very useful, which is why I choose to work with 
zfs send.


Tristram



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage J4400 SATA Interposer Card

2010-06-29 Thread Rod Dines
I have a box of 6 new Sun J4200/J4400 SATA HDD caddies/mountings (250GB SATA
drives removed) that I am selling on eBay.  Search eBay for the listing
title
"Sun Storage J4200/J4400 Array-SATA Hard Drive Caddy x 6"
or follow this link
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=110553190864#shId
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Forcing resilver?

2010-06-29 Thread Roy Sigurd Karlsbakk
Hi

There was some mix-up with switching of drives and an unexpected reboot, so I
suddenly have a drive in my pool that is only partly resilvered. zpool status shows
the pool is fine, but after a scrub, it shows the drive as faulted. I've been told
on #opensolaris that making a new pool on the drive, destroying that pool, and
then doing a zpool replace of the drive will help, or that moving the drive out and
putting another filesystem on it, then replacing it in the pool, might help.
But then, is it possible to forcibly resilver a drive without this hassle?
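For what it's worth, a few hedged things that are sometimes enough on their own;
the pool and device names below are placeholders for yours:

  zpool clear tank c0t0d0      # clear the fault/error counters on the device
  zpool online tank c0t0d0     # bring it back online; this can kick off a resilver
  zpool replace tank c0t0d0    # or "replace the disk with itself" to force a full resilver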

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Permanent errors detected in :<0x13>

2010-06-29 Thread Cindy Swearingen

Hi Brian,

Because the pool is still online and the metadata is redundant, maybe
these errors were caused by a brief hiccup from the USB device's
physical connection. You might try:

# zpool clear external c0t0d0p0

Then, run a scrub:

# zpool scrub external

If the above fails, then please identify the Solaris release and what
events preceded this problem.

Thanks,

Cindy




On 06/29/10 11:15, W Brian Leonard wrote:

Hi Cindy,

The scrub didn't help and yes, this is an external USB device.

Thanks,
Brian

Cindy Swearingen wrote:

Hi Brian,

You might try running a scrub on this pool.

Is this an external USB device?

Thanks,

Cindy

On 06/29/10 09:16, Brian Leonard wrote:

Hi,

I have a zpool which is currently reporting that the 
":<0x13>" file is corrupt:


bleon...@opensolaris:~$ pfexec zpool status -xv external
  pool: external
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          c0t0d0p0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

:<0x13>

Otherwise, as you can see, the pool is online. As it's unclear to me 
how to restore the ":<0x13>" file, is my only option for 
correcting this error to destroy and recreate the pool?


Thanks,
Brian



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Permanent errors detected in :<0x13>

2010-06-29 Thread Cindy Swearingen



I reviewed the zpool clear syntax (looking at my own docs) and didn't
remember that a one-device pool probably doesn't need the device
specified. For pools with many devices, you might want to clear
the errors on just a particular device.

USB sticks for pools are problematic. It would be good to know what
caused these errors so we can try to prevent them in the future.

We know that USB devices don't generate/fabricate device IDs, so they
are prone to problems when being moved/changed/re-inserted, but without
more info, it's hard to tell what happened.

cs

On 06/29/10 14:13, W Brian Leonard wrote:
Interesting, this time it worked! Does specifying the device to clear
cause the command to behave differently? I had assumed that without the device
specification, the clear would just apply to all devices in the pool
(which is just the one).


Thanks,
Brian

Cindy Swearingen wrote:

Hi Brian,

Because the pool is still online and the metadata is redundant, maybe
these errors were caused by a brief hiccup from the USB device's
physical connection. You might try:

# zpool clear external c0t0d0p0

Then, run a scrub:

# zpool scrub external

If the above fails, then please identify the Solaris release and what
events preceded this problem.

Thanks,

Cindy




On 06/29/10 11:15, W Brian Leonard wrote:

Hi Cindy,

The scrub didn't help and yes, this is an external USB device.

Thanks,
Brian

Cindy Swearingen wrote:

Hi Brian,

You might try running a scrub on this pool.

Is this an external USB device?

Thanks,

Cindy

On 06/29/10 09:16, Brian Leonard wrote:

Hi,

I have a zpool which is currently reporting that the 
":<0x13>" file is corrupt:


bleon...@opensolaris:~$ pfexec zpool status -xv external
  pool: external
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise 
restore the

entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        external    ONLINE       0     0     0
          c0t0d0p0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

:<0x13>

Otherwise, as you can see, the pool is online. As it's unclear to 
me how to restore the ":<0x13>" file, is my only option 
for correcting this error to destroy and recreate the pool?


Thanks,
Brian





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance

2010-06-29 Thread Josh Simon
Have you tried creating tank/bkp without the -s option? I believe I read
somewhere that the -s option can lead to poor performance on larger
volumes (which doesn't make sense to me). Also, are you using a ZIL/log
device?


Josh Simon

On 06/29/2010 09:33 AM, Effrem Norwood wrote:

Hi All,

I created a zvol with the following options on an X4500 with 16GB of ram:

zfs create -s -b 64K -V 250T tank/bkp

I then enabled dedup and compression and exported it to Windows Server
2008 as iSCSI via COMSTAR. There it was formatted with a 64K cluster
size which is the NTFS default for volumes of this size. IO performance
in Windows is so slow that my backup jobs are timing out. Previously
when I created a much smaller volume using a “-b” of 4K and NTFS cluster
size of 4K performance was excellent. What I would like to know is what
to look at to figure out where the performance is going, e.g. it’s the
blocksize or it’s COMSTAR etc. CPU consumption according to top is very
low so compression seems unlikely. The L2ARC is 11G and there is 4G + of
free memory and no swapping so dedup seems unlikely as well. I have
looked at the latencytop utility for clues but am not familiar enough
with the code to conclude anything useful.

Thanks!



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-29 Thread Victor Latushkin

On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:

> Victor,
> 
> The 'zpool import -f -F tank' failed at some point last night. The box was 
> completely hung this morning; no core dump, no ability to SSH into the box to 
> diagnose the problem. I had no choice but to reset, as I had no diagnostic 
> ability. I don't know if there would be anything in the logs?

It sounds like it might have run out of memory. Is it an option for you to add more
memory to the box temporarily?

Even if it is an option, it is good to prepare for such an outcome and have kmdb
loaded, either at boot time by adding -k to the 'kernel$' line in the GRUB menu, or by
loading it from the console with 'mdb -K' before attempting the import (type ':c' at
the mdb prompt to continue). In case it hangs again, you can press 'F1-A' on the
keyboard, drop into kmdb and then use '$

If your hardware has a physical or virtual NMI button, you can use that too to drop
into kmdb, but you'll need to set a kernel variable for that to work:

http://blogs.sun.com/darren/entry/sending_a_break_to_opensolaris
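A minimal sketch of the two ways to get kmdb loaded; the GRUB line is only an
example of a b134-era x86 menu.lst entry, and your exact 'kernel$' line may differ:

  # one-off, from the GRUB menu: append -k to the kernel$ line, e.g.
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k

  # or from a running console, before attempting the import:
  mdb -K        # loads kmdb and drops to the kmdb prompt
  :c            # continue running; F1-A will drop back into kmdb later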

> Earlier I ran 'zdb -e -bcsvL tank' in write mode for 36 hours and gave up to 
> try something different. Now the zpool import has hung the box.

What do you mean by running zdb in write mode? zdb is normally a read-only tool.
Did you change it in some way?

> Should I try zdb again? Any suggestions?

It sounds like zdb is not going to be helpful, as inconsistent dataset
processing happens only in read-write mode. So you need to try the above
suggestions with more memory and kmdb/NMI.

victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-29 Thread Victor Latushkin

On Jun 29, 2010, at 1:30 AM, George wrote:

> I've attached the output of those commands. The machine is a v20z if that 
> makes any difference.

The stack trace is similar to one in a bug that I do not recall right now, and it
indicates that there is likely corruption in the ZFS metadata.

I suggest you try running 'zdb -bcsv storage2' and show the result.

victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-29 Thread Andrew Jones
> 
> On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
> 
> > Victor,
> > 
> > The 'zpool import -f -F tank' failed at some point
> last night. The box was completely hung this morning;
> no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset,
> as I had no diagnostic ability. I don't know if there
> would be anything in the logs?
> 
> It sounds like it might run out of memory. Is it an
> option for you to add more memory to the box
> temporarily?

I'll place the order for more memory or transfer some from another machine. 
Seems quite likely that we did run out of memory.

> 
> Even if it is an option, it is good to prepare for
> such outcome and have kmdb loaded either at boot time
> by adding -k to 'kernel$' line in GRUB menu, or by
> loading it from console with 'mdb -K' before
> attempting import (type ':c' at mdb prompt to
> continue). In case it hangs again, you can press
> 'F1-A' on the keyboard, drop into kmdb and then use
> '$ 
> If you hardware has physical or virtual NMI button,
> you can use that too to drop into kmdb, but you'll
> need to set a kernel variable for that to work:
> 
> http://blogs.sun.com/darren/entry/sending_a_break_to_o
> pensolaris
> 
> > Earlier I ran 'zdb -e -bcsvL tank' in write mode
> for 36 hours and gave up to try something different.
> Now the zpool import has hung the box.
> 
> What do you mean be running zdb in write mode? zdb
> normally is readonly tool. Did you change it in some
> way?

I had read elsewhere that 'set zfs:zfs_recover=1' and 'set aok=1' placed zdb
into some kind of a write/recovery mode. I have set these in /etc/system. Is
this a bad idea in this case?
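For reference, this is presumably the /etc/system form of the settings mentioned
above (comment lines in /etc/system start with '*'):

  * temporary recovery aids; remove once the pool imports cleanly
  set zfs:zfs_recover=1
  set aok=1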

> 
> > Should I try zdb again? Any suggestions?
> 
> It sounds like zdb is not going to be helpful, as
> inconsistent dataset processing happens only in
> read-write mode. So you need to try above suggestions
> with more memory and kmdb/nmi.

Will do, thanks!

> 
> victor
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> ss
>
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-29 Thread George
> I suggest you to try running 'zdb -bcsv storage2' and
> show the result.

r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address

then I tried

r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File exists

George
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss