Re: [zfs-discuss] strange zfs receive behavior

2007-10-15 Thread Edward Pilatowicz
On Sun, Oct 14, 2007 at 09:37:42PM -0700, Matthew Ahrens wrote:
> Edward Pilatowicz wrote:
> >hey all,
> >so i'm trying to mirror the contents of one zpool to another
> >using zfs send / receive while maintaining all snapshots and clones.
>
> You will enjoy the upcoming "zfs send -R" feature, which will make your
> script unnecessary.
>

sweet.  while working on it i realized that this really needed to be
built-in functionality.  :)

i assume that this will allow for backups by doing "zfs snap -r" and
"zfs send -R" one day, then sometime later doing the same thing and
just sending the deltas for every filesystem?  will this also include
any other random snapshots created in between the two "zfs send -R"
runs?  (not just snapshots that were used for clones.)

is there a bugid/psarc case number?
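the workflow i'm hoping for, sketched with invented pool and snapshot
names (a hedged guess at the eventual syntax, since "zfs send -R" isn't
released yet):

```shell
# hypothetical: full recursive copy on day one...
zfs snapshot -r tank@day1
zfs send -R tank@day1 | zfs receive -d backup

# ...then later, deltas only, for every filesystem at once
zfs snapshot -r tank@day2
zfs send -R -i tank@day1 tank@day2 | zfs receive -d backup
```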

> >[EMAIL PROTECTED] zfs send -i 070221 export/ws/[EMAIL PROTECTED] | zfs 
> >receive -v -d
> >export2
> >receiving incremental stream of export/ws/[EMAIL PROTECTED] into
> >export2/ws/[EMAIL PROTECTED]
> >cannot receive: destination has been modified since most recent snapshot
>
> You may be hitting 6343779 "ZPL's delete queue causes 'zfs restore' to
> fail". To work around it, use "zfs recv -F".
>
> --matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS recovery tool, interested?

2007-10-15 Thread Mario Goebbels
> Having my 700GB single-disk ZFS pool crash on me created ample need for a
> recovery tool.
> 
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a single-disk ZFS filesystem, where for example
> the Solaris kernel keeps panicking.
> 
> Is there any interest in it being released to the public?

Would be nice to have.

Still, it calls for a recovery mode in ZFS, where all available disks of
the pool are mounted, unrecoverable errors do not cause panics, and
missing data is filled with zeroes.  A bonus would be a special cp command
that works with ZFS and tells me whether a file copied correctly or not.
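Until something like that exists, a rough approximation in plain shell (a
hedged sketch: it is not ZFS-aware, it only confirms that the bytes read
back from the destination match the source):

```shell
# verified_cp SRC DST -- copy a file, then compare CRC+size of both sides.
# Note: this re-reads through the page cache, so it catches truncated or
# failed copies, not the silent on-disk corruption ZFS checksums detect.
verified_cp() {
  src=$1 dst=$2
  cp "$src" "$dst" || return 1
  [ "$(cksum < "$src")" = "$(cksum < "$dst")" ]
}
```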

-mg





Re: [zfs-discuss] ZFS on EMC Symmetrix

2007-10-15 Thread Robert Milkowski
Hello JS,

Sunday, October 14, 2007, 7:01:28 AM, you wrote:

J> I've been running ZFS against EMC Clariion CX-600 and CX-500s in
J> various configurations, mostly exported disk situations, with a
J> number of kernel flatlining situations. Most of these situations
J> include Page83 data errors in /var/adm/messages during kernel crashes.
J>  As we're outgrowing the speed and size of the arrays I'm
J> considering my next step, and since EMC is proffering ZFS support
J> on the Symmetrix line I was wondering about your experiences with kernel 
drops.
J>  
J>  

I've also been using ZFS on CX arrays and I don't see the problems you describe.


-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-15 Thread Robert Milkowski
Hello Paul,

If you don't need support, then Sun Cluster 3.2 is free and it works
with ZFS.

What you could do is set up a 3-node cluster with 3 resource groups,
each assigned a different primary node and with failback set to true.

Of course, in that config the storage requirements will be different.

Still, using zfs send|recv as a backup to different storage would be a
good idea.




-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



[zfs-discuss] HAMMER

2007-10-15 Thread Robert Milkowski
Hello zfs-discuss,

  http://leaf.dragonflybsd.org/mailarchive/kernel/2007-10/msg6.html
  http://leaf.dragonflybsd.org/mailarchive/kernel/2007-10/msg8.html
  

-- 
Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
 http://milek.blogspot.com



Re: [zfs-discuss] Adding my own compression to zfs

2007-10-15 Thread Eric Schrock
Yes, I think that was the original intent of the project proposal.  It
could probably be reworded to decrease emphasis on a single algorithm,
but I read it as a generic exploration of alternative algorithms.

Pluggable algorithms are tricky, because compression is encoded as a
single 8-bit quantity in the block pointer.  This doesn't make it
impossible, just difficult.  One could imagine, for example, reserving
the top bit to indicate that the remainder of the value is an index into
some auxiliary table that can identify compression schemes in some
extended manner.  This avoids the centralized repository, but introduces
a number of interesting failure modes, such as being unable to open a
pool because it uses an unsupported compression scheme.  All very
doable, but it's a lot of work for (IMO) little gain, not to mention
increased difficulty maintaining compatibility across disparate
versions (what is the set of compression algorithms needed to be 100%
compatible?).
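For illustration, the top-bit scheme sketched above could look like this
(a hypothetical encoding shown in shell arithmetic; none of these names
exist in ZFS):

```shell
# hypothetical: bit 7 flags "extended", low 7 bits index an aux table
encode_comp() {   # $1 = extended flag (0 or 1), $2 = algorithm index
  if [ "$1" -eq 1 ]; then
    echo $(( 0x80 | ( $2 & 0x7F ) ))
  else
    echo $(( $2 & 0x7F ))
  fi
}
is_extended() { [ $(( $1 & 0x80 )) -ne 0 ]; }   # aux-table lookup needed?
comp_index()  { echo $(( $1 & 0x7F )); }        # index into built-ins or aux table
```

A pool using only built-in algorithms keeps values 0-127 and stays
compatible; anything with the top bit set requires the auxiliary table to
be readable, which is exactly the failure mode mentioned above.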

- Eric

On Sun, Oct 14, 2007 at 03:28:24AM -0700, [EMAIL PROTECTED] wrote:
> > I haven't heard from any other core contributors, but this sounds like a
> > worthy project to me.  Someone from the ZFS team should follow through
> > to create the project on os.org[1]
> >
> > It sounds like Domingos and Roland might constitute the initial
> > "project team".
> 
> In my opinion, the project should also include an effort in getting LZO
> into ZFS, as an advanced, fast but efficient variant.
> 
> For that matter, if it were up to me, there should be an effort in
> modularizing the ZFS compression algorithms into loadable kernel modules,
> also allowing easy addition of algorithms. I suppose the same should apply
> to other components where possible, e.g. the spacemap allocator discussed
> on this list. But I'm a mere C# coder, so I can't really help with that.
> 
> -mg

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS on EMC Symmetrix

2007-10-15 Thread Wade . Stuart
It may make sense to post your code level (host->EMC) and your
topology/HBA (type and firmware level) info for the systems you are having
the issues on.  EMC setups are very well known to have their reliability
linked to code level and topology -- a machine running 16 code against
back-revved Emulex + Cisco MDS vs. 19 against QLogic + McDATA is an
entirely different beast.
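On the Solaris side, the usual commands for collecting that info (hedged:
availability and output detail vary by Solaris release and patch level):

```shell
fcinfo hba-port            # HBA model, firmware and driver versions
prtconf -v | less          # device tree detail, including driver bindings
modinfo | grep -i emlx     # loaded Emulex (emlxs) driver version, if any
```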

-Wade


[EMAIL PROTECTED] wrote on 10/15/2007 05:49:28 AM:

> Hello JS,
>
> Sunday, October 14, 2007, 7:01:28 AM, you wrote:
>
> J> I've been running ZFS against EMC Clariion CX-600 and CX-500s in
> J> various configurations, mostly exported disk situations, with a
> J> number of kernel flatlining situations. Most of these situations
> J> include Page83 data errors in /var/adm/messages during kernel crashes.
> J>  As we're outgrowing the speed and size of the arrays I'm
> J> considering my next step, and since EMC is proffering ZFS support
> J> on the Symmetrix line I was wondering about your experiences with
> kernel drops.
> J>
> J>
>
> I've also been using ZFS on CX arrays and I don't see the problems you describe.
>
>
> --
> Best regards,
>  Robert Milkowski  mailto:[EMAIL PROTECTED]
>http://milek.blogspot.com



Re: [zfs-discuss] ZFS recovery tool, interested?

2007-10-15 Thread Eric Schrock
FYI -

- The "unrecoverable errors = panic" problem is being fixed as part of
  PSARC 2007/567.

- We should be able to recover *some* data when some (but not all)
  toplevel vdevs are faulted.  See 6406289.

- Reading corrupted blocks is a little trickier, but 6186106 is filed to
  cover this.

That being said, such a tool seems interesting until the above bugs/RFEs
are fixed.  Given that PSARC 2007/567 is undergoing final testing,
fixing 6406289 would be a good project for anyone who's interested, as
there are some ramifications that make this non-trivial.  Fixing 6186106
is much harder, and requires some careful design decisions.

- Eric

On Mon, Oct 15, 2007 at 12:07:36PM +0200, Mario Goebbels wrote:
> > Having my 700GB single-disk ZFS pool crash on me created ample need for a
> > recovery tool.
> > 
> > So I spent the weekend creating a tool that lets you list directories and
> > copy files from any pool on a single-disk ZFS filesystem, where for example
> > the Solaris kernel keeps panicking.
> > 
> > Is there any interest in it being released to the public?
> 
> Would be nice to have.
> 
> Still, it calls for a recovery mode in ZFS, where all available disks of
> the pool are mounted, unrecoverable errors do not cause panics, and
> missing data is filled with zeroes.  A bonus would be a special cp command
> that works with ZFS and tells me whether a file copied correctly or not.
> 
> -mg
> 





--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] use 32-bit inode scripts on zfs?

2007-10-15 Thread Tom Davies
Take, for example, old custom 32-bit Perl scripts.  Can they work with
128-bit ZFS?
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] use 32-bit inode scripts on zfs?

2007-10-15 Thread Frank Hofmann
On Mon, 15 Oct 2007, Tom Davies wrote:

> Take, for example, old custom 32-bit Perl scripts.  Can they work with
> 128-bit ZFS?

That question was posted either here or on some other help aliases 
recently ...

If you have any non-largefile-aware application that must under all
circumstances be kept alive, run it within a filesystem that's smaller
than 4GB, or in the ZFS case, within a filesystem with a quota of 4GB.

That'll preserve compatibility "for all eternity".

(A more precise answer needs more information about what '32-bit' means in
the context of the question.)
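As zfs commands, that workaround looks like this (pool and filesystem
names invented for the example):

```shell
# cap the filesystem at 4GB so no file -- and no file offset -- can
# exceed what non-largefile-aware (32-bit) interfaces can represent
zfs create rpool/legacy
zfs set quota=4g rpool/legacy
```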

FrankH.



[zfs-discuss] Sun's storage product roadmap?

2007-10-15 Thread Richard Elling
I can neither confirm nor deny that I can confirm or deny what somebody else 
said.
http://www.techworld.com/storage/features/index.cfm?featureID=3728&pagtype=samecatsamechan
  -- richard


Re: [zfs-discuss] Sun's storage product roadmap?

2007-10-15 Thread Claus Guttesen
> I can neither confirm nor deny that I can confirm or deny what somebody else 
> said.
> http://www.techworld.com/storage/features/index.cfm?featureID=3728&pagtype=samecatsamechan
>   -- richard

No problem, just say yes or no! :-)

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] ZFS on EMC Symmetrix

2007-10-15 Thread JS
Sun has seen all of this during various problems over the past year and a half, 
but:

CX600 FLARE code 02.07.600.5.027
CX500 FLARE code 02.19.500.5.044

Brocade Fabric, relevant switch models are 4140 (core), 200e (edge), 3800 
(edge).

Sun Branded Emulex HBAs in the following models:

SG-XPCI1FC-EM2   (on various v490s, Update 3/4, kernel 125100-10/120011-14)
Hardware Version   = 1001206d
Driver Version = 2.20k (2007.06.04.09.35)
Optional ROM Version   = 1.50a9
Firmware Version   = 1.91x15


SG-XPCIE1FC-EM4  (on various t2000s, Update 3, kernel 125100-10)
Hardware Version   = 2057706d
Driver Version = 2.20k (2007.06.04.09.35)
Optional ROM Version   = 1.50a9
Firmware Version   = 2.70x4


SG-XPCI1FC-EM4-Z (on a v890, Update 3, kernel 125100-10)
Hardware Version   = 1036406d
Driver Version = 2.20k (2007.06.04.09.35)
Optional ROM Version   = 1.50a8
Firmware Version   = 2.70x1

 What are you running ZFS on?
 
 


Re: [zfs-discuss] Sun's storage product roadmap?

2007-10-15 Thread Rich Teer
On Mon, 15 Oct 2007, Richard Elling wrote:

> I can neither confirm nor deny that I can confirm or deny what somebody else 
> said.
> http://www.techworld.com/storage/features/index.cfm?featureID=3728&pagtype=samecatsamechan

Ooohh: JBOD 1400 (2U, 24 x 2.5" drives).  Someone's been listening!  :-)
(An even smaller JBOD 1U box would also be nice...)

-- 
Rich Teer, SCSA, SCNA, SCSECA, OGB member

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.myonlinehomeinventory.com


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-15 Thread Lin Ling

Hi Duff,

The OpenSolaris bug reporting system is not very robust yet.  The team is
aware of this and plans to improve it, so the bugs you filed might have
been lost.

I have filed bug 6617080 for you.  You should be able to see it through
bugs.opensolaris.org tomorrow.  I will contact Larry to get the core file
for the bug.

Thanks,
Lin

J Duff wrote:
> I've tried to report this bug through the http://bugs.opensolaris.org/ site 
> twice. The first time on September 17, 2007 with the title "ZFS Kernel Crash 
> During Disk Writes (SATA and SCSI)". The second time on September 19, 2007 
> with the title "ZFS or Storage Subsystem Crashes when Writing to Disk". After 
> initial entry of the bug and confirmation screen, I've never heard anything 
> back. I've searched the bug database repeatedly looking for the entry and a 
> corresponding bug ID. I've found nothing familiar.
>
> Larry (from the sd group?) requested I upload the corefile which I did, but I 
> haven't heard from him again.
>
> It would be good if an email were sent to the submitter of a bug indicating 
> the state of the submission. If for some reason it was filtered out, or is in 
> a hold state for a long period of time, the email would be reassuring.
>
> This is a serious bug which causes a crash during heavy disk writes. We 
> cannot complete our quality testing as long as this bug remains. Thanks for 
> your interest.
>
> Duff


[zfs-discuss] multi master replication

2007-10-15 Thread Ged
Does anyone know if multi-master replication can be done with ZFS?

Use case is 2 data centers that you want to keep in sync.

My understanding is that only master-slave is possible.

ged
 
 


Re: [zfs-discuss] multi master replication

2007-10-15 Thread Richard Elling
Ged wrote:
> Does anyone know if multi-master replication can be done with ZFS?

What is your definition of multi-master replication?

> Use case is 2 data centers that you want to keep in sync.
> 
> My understanding is that only master-slave is possible.

ZFS doesn't do replication, per se.  It is typically used with
something like AVS for remote sites.  See:
http://opensolaris.org/os/project/avs/
http://blogs.sun.com/AVS/entry/avs_and_zfs_seamless

  -- richard


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-15 Thread Nigel Smith
Hello Duff
Thanks for emailing me the source & binary for your test app.

My PC for testing has snv_60 installed.  I was about to upgrade to snv_70,
but I thought it might be useful to test with the older version of OpenSolaris
first, in case the problem you are seeing is a regression.

For the first test, I connected up a Samsung 400GB SATA-2 drive
to a PCIe x1 card which uses the Silicon Image SiI-3132 chip.
This uses the OpenSolaris 'si3124' driver.
So my ZFS pool is using a single drive.

OK, I ran your test app using the parameters you advised, with the
addition of '-c' to validate the created data with a read.
And the first run of the test has completed with no problems.
So no crash with this setup.

 # ./gbFileCreate -c -r /mytank -d 1000 -f 1000 -s 6:6

 CREATING DIRECTORIES AND FILES:
 In folder "/mytank/greenbytes.1459",
 creating 1000 directories each containing 1000 files.
 Files range in size from 6 bytes to 6 bytes.

 CHECKING FILE DATA:
 Files Passed = 100, Files Failed = 0.
 Test complete.

For the next test, I am going to swap the Samsung drive over onto
the motherboard's Intel ICH7 SATA chip, so it will then be using the 'ahci'
driver.
But it's late now, so hopefully I will have the time to do that tomorrow.

I have had a look at the source code history for the 'sd' driver, and I see
that there have been quite a lot of changes recently.  So if there is a 
problem with that, then maybe I will not experience the problem until I
upgrade to snv_70 or later.
Regards,
Nigel Smith
 
 