Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread C. Bergström
Joerg Schilling wrote:
> Tim <[EMAIL PROTECTED]> wrote:
>
>   
>> ZFS is licensed under the CDDL, and as far as I know does not require
>> derivative works to be open source.  It's truly free like the BSD license in
>> that companies can take CDDL code, modify it, and keep the content closed.
>> They are not forced to share their code.  That's why there are "closed"
>> patches that go into mainline Solaris, but are not part of OpenSolaris.
>> 
>
> The CDDL requires you to make modifications public.
>
>
>
>   
>> While you may not like it, this isn't the GPL.
>> 
>
> The GPL is more free than many people may believe now ;-)
>
> The GPL is unfortunately misunderstood by most people.
>
> The GPL allows you to link GPLd projects against other code under
> _any_ other license that does not forbid a few basic things.
> This is because the GPL ends at the boundary of the "work". The binary in this
> case is just a container for more than one work, and the license of
> the binary is the aggregation of the requirements of the licenses
> used by the sources.
>
>
> The influence of the CDDL ends at the file level. All changes are covered by
> the copyleft of the CDDL.
>   

My apologies to Matt, as I didn't expect so much noise over the issue;
mostly I just wanted things to be clarified.  If anything positive can
still come from this, let us know.

./C


Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-07 Thread gm_sjo
2008/10/6 mike <[EMAIL PROTECTED]>:
> I am trying to finish building a system and I kind of need to pick
> working NIC and onboard SATA chipsets (video is not a big deal - I can
> get a silent PCIe card for that, I already know one which works great)
>
> I need 8 onboard SATA. I would prefer Intel CPU. At least one gigabit
> port. That's about it.

I am using an Intel S3210SH server board, which has two onboard Intel
gigabit interfaces and six onboard SATA ports - all of which are supported. I
am also using a Supermicro 8-port SATA card (PCI-X), which again is a
commonly recommended, well-supported card.
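
For what it's worth, once a board is up you can sanity-check the driver
bindings with something like the following (just a sketch - the grep
patterns are only examples):

  prtconf -D | grep -i -e sata -e network   # show which driver, if any, is bound to each device
  cfgadm -al                                # list SATA ports and attached disks

A controller that shows up in prtconf without a driver name next to it is
probably not supported by the build you are running.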


Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-07 Thread mike
Yeah, I was scoping out an Intel board - the only one I could find had 8 SATA ports.

However, I couldn't find much info on driver support for those either. For this
machine I need 16 ports total and want 8 of them onboard. It shouldn't be
difficult, but I don't want to order something, find out it's not
compatible, and have to deal with returning it online...


On Tue, Oct 7, 2008 at 1:33 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/10/6 mike <[EMAIL PROTECTED]>:
>> I am trying to finish building a system and I kind of need to pick
>> working NIC and onboard SATA chipsets (video is not a big deal - I can
>> get a silent PCIe card for that, I already know one which works great)
>>
>> I need 8 onboard SATA. I would prefer Intel CPU. At least one gigabit
>> port. That's about it.
>
> I am using an Intel S3210SH server board, which has two onboard Intel
> gigabit interfaces and 6 onboard SATA - all of which are supported. I
> am also using a Supermicro 8-port SATA card (PCI-X), which again is
> the recommended item for use!
>


Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Johan Hartzenberg
Some people wrote:

>
> > covered code.   Since Sun owns that code they would need to rattle the
> > cage.  Sun? Anyone have any talks with these guys yet?
>
> Isn't CDDL file based so they could implement all the new functionality in
>
>
Wouldn't it be great if programmers could just focus on writing code rather
than having to worry about getting sued over whether or not someone else is
able to make a derivative program from their code?


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-07 Thread Robert Milkowski
Hello Nicolas,

Monday, October 6, 2008, 10:51:58 PM, you wrote:

NW> I'm pretty sure that local RAM beats remote-anything, no matter what the
NW> "anything" (as long as it isn't RAM) and what the protocol to get to it
NW> (as long as it isn't a normal backplane).  (You could claim with NUMA
NW> memory can be remote, so let's say that for a reasonable value of
NW> "remote.")

IIRC, the total throughput to remote memory over Sun Fire Link could be
faster than to local memory... just a funny thing I remembered.

Not that it is relevant here.



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



[zfs-discuss] ZFS confused about disk controller

2008-10-07 Thread Caryl Takvorian
Hi all,

Please keep me on cc: since I am not subscribed to either lists.


I have a weird problem with my OpenSolaris 2008.05 installation (build 
96) on my Ultra 20 workstation.
For some reason, ZFS has become confused and has recently started to 
believe that my zpool is using a device which does not exist!

prodigal:zfs #zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c1t0d0s0  ONLINE       0     0     0

errors: No known data errors


The c1t0d0s0 device doesn't exist on my system. Instead, my disk is 
attached to c5t0d0s0 as shown by

prodigal:zfs #format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c5t0d0 
  /[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED]/[EMAIL PROTECTED],0

or

prodigal:zfs #cfgadm
Ap_Id                 Type  Receptacle  Occupant    Condition
sata0/0::dsk/c5t0d0   disk  connected   configured  ok



What is really annoying is that I attempted to update my current 
OpenSolaris build 96 to the latest (b98) by using

# pkg image-update

The update went well, and at the end it selected the new BE to be 
activated upon reboot, but it failed when attempting to modify the grub 
entry, because install_grub asks ZFS what my boot device is and gets back 
the wrong device (of course, I am using ZFS as my root filesystem, 
otherwise it wouldn't be fun).

When I manually try to run install_grub, this is the error message I get:

prodigal:zfs #/tmp/tmpkkEF1W/boot/solaris/bin/update_grub -R /tmp/tmpkkEF1W
Creating GRUB menu in /tmp/tmpkkEF1W
bootadm: fstyp -a on device /dev/rdsk/c1t0d0s0 failed
bootadm: failed to get pool for device: /dev/rdsk/c1t0d0s0
bootadm: fstyp -a on device /dev/rdsk/c1t0d0s0 failed
bootadm: failed to get pool name from /dev/rdsk/c1t0d0s0
bootadm: failed to create GRUB boot signature for device: /dev/rdsk/c1t0d0s0
bootadm: failed to get grubsign for root: /tmp/tmpkkEF1W, device 
/dev/rdsk/c1t0d0s0
Installing grub on /dev/rdsk/c1t0d0s0
cannot open/stat device /dev/rdsk/c1t0d0s2


The worst bit is that beadm now refuses to reactivate my currently 
running OS for use upon the next reboot.
So, the next time I reboot, my system is probably never going to come 
back up.


prodigal:zfs #beadm list

BE             Active  Mountpoint      Space    Policy  Created
--             ------  ----------      -----    ------  -------
opensolaris-5  N       /               128.50M  static  2008-09-09 13:03
opensolaris-6  R       /tmp/tmpkkEF1W  52.19G   static  2008-10-07 10:14


prodigal:zfs #export BE_PRINT_ERR=true
prodigal:zfs #beadm activate opensolaris-5
be_do_installgrub: installgrub failed for device c1t0d0s0.
beadm: Unable to activate opensolaris-5
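
If it comes to it, I assume I could install grub manually onto the real
device with something like the following (just a sketch, assuming the
standard stage file locations):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0

but that still would not fix whatever ZFS and bootadm think my boot device is.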


So, how can I force zpool to accept that my disk device really is 
c5t0d0s0 and forget about c1?

Since the file /etc/zfs/zpool.cache contains a reference to 
/dev/dsk/c1t0d0s0, I have rebuilt the boot_archive after removing it 
from the ramdisk, but I have cold feet about rebooting without 
confirmation.
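
In case it helps with diagnosis, I believe the cached configuration and the
on-disk labels can be compared with something like this (just a sketch, I
have not verified the exact output):

prodigal:zfs #zdb -C                      # dump the pool config cached in /etc/zfs/zpool.cache
prodigal:zfs #zdb -l /dev/rdsk/c5t0d0s0   # dump the ZFS labels straight from the real device

If the labels on c5t0d0s0 show the pool while the cached config still points
at c1t0d0s0, then presumably only the cache (and whatever bootadm derives
from it) is stale.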


Has anyone seen this before, or has any idea how to fix this situation?


Thanks


Caryl


-- 
~~~
Caryl Takvorian [EMAIL PROTECTED]
ISV Engineering phone : +44 (0)1252 420 686
Sun Microsystems UK mobile: +44 (0)771 778 5646





Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Casper . Dik


>Some people wrote:
>
>>
>> > covered code.   Since Sun owns that code they would need to rattle the
>> > cage.  Sun? Anyone have any talks with these guys yet?
>>
>> Isn't CDDL file based so they could implement all the new functionality in
>>
>>
>Wouldn't it be great if programmers could just focus on writing code rather
>than having to worry about getting sued over whether someone else is able or
>not to make a derivative program from their code?

Yep, but in THIS world it *is* an important consideration.


http://www.linux-watch.com/news/NS3761924232.html
http://www.linuxdevices.com/news/NS7575957635.html
http://www.theinquirer.net/en/inquirer/news/2004/01/06/kiss-technology-accused-of-stealing-free-software


If you use someone else's code, make sure you read the license and follow 
it; then you should be fine.

Casper



Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 10/07/2008 07:15:46 AM:

> Hello Wade,
>
> Monday, October 6, 2008, 8:56:12 PM, you wrote:
>
> WSfc> [EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:
>
> >> Hi all
> >>
> >> In another thread a short while ago.. A cool little movie with some
> >> gumballs was all we got to learn about green-bytes.  The product
> >> launched and maybe some of the people that follow this list have had a
> >> chance to take a look at the code/product more closely?  Wstuart asked
> >> how they were going to handle section 3.1 of the CDDL, but nobody from
> >> green-bytes even made an effort to clarify this.  I called since I'm
> >> consulting with companies who are potential customers, but are any of
> >> developers even subscribed to this list?
> >>
> >> After a call and exchanging a couple emails I'm left with the impression
> >> the source will *not* be released publicly or to customers.  I'm not the
> >> copyright holder, a legal expert, or even a customer, but can someone
> >> from Sun or green-bytes make a comment.  I apologize for being a bit off
> >> topic, but is this really acceptable to the community/Sun in general?
> >> Maybe the companies using Solaris and NetApp don't care about source
> >> code, but then the whole point of opening Solaris is just reduced to
> >> marketing hype.
> >>
>
> WSfc> Yes,  this would be interesting.  CDDL requires them to release code for
> WSfc> any executable version they ship.  Considering they claim to have "...start
> WSfc> with ZFS and makes it better"  it sounds like they have modified CDDL
> WSfc> covered code.   Since Sun owns that code they would need to rattle the
> WSfc> cage.  Sun? Anyone have any talks with these guys yet?
>
> Isn't CDDL file based so they could implement all the new functionality in
> new files and only added some includes and couple of useless (if
> provided alone) changes.
>

Robert,

  Yes -- it is file based and derivative-code based (copy covered code to a
new file and that file is now covered). New code in a new file is not
automatically covered; that is the author's choice.  That said, if they have
added dedup to ZFS they may have taken extraordinary steps to segment their
code from covered code.  My hunch is they did not.  Everything from
resilver, the ZIL, etc. would need to be dedup-aware.  Either way, release the
required code and there is no harm, no foul, right?  If it is stubs, then so
be it.  I am more interested to see whether they implemented it the same way I
started to or whether it is something new.  If it is code complete and all
covered, even better.

-Wade



Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Robert Milkowski
Hello Wade,

Monday, October 6, 2008, 8:56:12 PM, you wrote:

WSfc> [EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:

>> Hi all
>>
>> In another thread a short while ago.. A cool little movie with some
>> gumballs was all we got to learn about green-bytes.  The product
>> launched and maybe some of the people that follow this list have had a
>> chance to take a look at the code/product more closely?  Wstuart asked
>> how they were going to handle section 3.1 of the CDDL, but nobody from
>> green-bytes even made an effort to clarify this.  I called since I'm
>> consulting with companies who are potential customers, but are any of
>> developers even subscribed to this list?
>>
>> After a call and exchanging a couple emails I'm left with the impression
>> the source will *not* be released publicly or to customers.  I'm not the
>> copyright holder, a legal expert, or even a customer, but can someone
>> from Sun or green-bytes make a comment.  I apologize for being a bit off
>> topic, but is this really acceptable to the community/Sun in general?
>> Maybe the companies using Solaris and NetApp don't care about source
>> code, but then the whole point of opening Solaris is just reduced to
>> marketing hype.
>>

WSfc> Yes,  this would be interesting.  CDDL requires them to release code for
WSfc> any executable version they ship.  Considering they claim to have "...start
WSfc> with ZFS and makes it better"  it sounds like they have modified CDDL
WSfc> covered code.   Since Sun owns that code they would need to rattle the
WSfc> cage.  Sun? Anyone have any talks with these guys yet?

Isn't the CDDL file based, so they could implement all the new functionality
in new files and only add some includes and a couple of changes that would be
useless if provided alone?


Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



[zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Matthew C Aycock
I recently ran into a problem for the second time with ZFS mirrors. I mirror 
between two different physical arrays for some of my data. One array (an SE3511) 
had a catastrophic failure and was unresponsive. With the ZFS in S10U3, the pool 
just basically waits for the array to come back and hangs pretty much all I/O to 
the zpool. I was told by Sun service that there are enhancements in the upcoming 
S10 10/08 release that will help.

My understanding of the code being delivered with S10 10/08 is that, on 2-way 
mirrors (which is what I use), if this same situation occurs again, ZFS will 
allow reads to happen but writes will still be queued until the other half of 
the mirror comes back.

Is it just me, or have we gone backwards? The whole point of mirroring is that 
if half the mirror goes we survive and can fix the problem with little to NO 
impact on the running system. Is this really true? With ZFS root also becoming 
available in S10 10/08, I would not want it anywhere near my root filesystem if 
this is really the behavior.

Any information would be GREATLY appreciated!

BlueUmp


Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Joerg Schilling
[EMAIL PROTECTED] wrote:

>   Yes -- file based and derivative code based (copy covered code to a
> new file and that file is now covered). New code in a new file is not
> automatically covered and the authors choice.  That said,  if they have
> added dedup to zfs they may have taken extraordinary steps to segment their
> code from covered code.  My hunch is they did not.  Everything from
> resilver, zil etc would need to be dedup aware.  Either case,  release the
> required code and there is no harm no foul right?  If it is stubs,  then so
> be it.  I am more interested to see if they implemented it the same way I
> started to or if it is something new.  If it is code complete and all
> covered even better.

If the code in a new file is derived from code in a file covered by the CDDL,
you may need to provide that code under the CDDL too.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Bob Friesenhahn
On Tue, 7 Oct 2008, [EMAIL PROTECTED] wrote:
>>>
>> Wouldn't it be great if programmers could just focus on writing 
>> code rather than having to worry about getting sued over whether 
>> someone else is able or not to make a derivative program from their 
>> code?
>
> Yep, but in THIS world it *is* an important consideration.

Definitely.  Copyrights and licenses should always be observed and 
respected.  In today's "MP3" generation, where copyright has been 
reduced by pimply-faced teenagers to less value than toilet paper, 
many people have taken up the habit of not respecting anyone's 
copyrights or licenses.  Meanwhile, the legal system still supports 
copyrights, and infringing products may be shut down overnight by 
court order.

If a copyright or license violation is suspected, then the copyright 
holder should be contacted, since the copyright holder is the only one 
with the legal right to pursue violators.  There is little value in 
"guilty until proven innocent" attacks on mailing lists.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



Re: [zfs-discuss] Comments on green-bytes

2008-10-07 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 10/07/2008 10:59:06 AM:

> On Tue, 7 Oct 2008, [EMAIL PROTECTED] wrote:
> >>>
> >> Wouldn't it be great if programmers could just focus on writing
> >> code rather than having to worry about getting sued over whether
> >> someone else is able or not to make a derivative program from their
> >> code?
> >
> > Yep, but in THIS world it *is* an important consideration.
>
> Definitely.  Copyrights and licenses should always be observed and
> respected.  In today's "MP3" generation where copyright has been
> reduced by pimply-faced teenagers to less value than toilet paper,
> many people have taken up a habit of not respecting anyone's
> copyrights or licenses.  Meanwhile, the legal system still supports
> copyrights and violating products may be shut down overnight due to
> court order.
>
> If a copyright or license violation is suspected, then the copyright
> holder should be contacted since the copyright holder is the only one
> with the legal right to persue violators.  There is little value to
> "guilty until proven innocent" attacks on mailing lists.
>

Bob,

  The mailing list happens to be run by the copyright holder, and
interested parties (the ZFS authors) with the ability to act within the
copyright holder are on this list -- it seems to be a valid medium for
notification. *shrug*  There are no "guilty until proven innocent" attacks here,
just a few people who have noted (and even contacted the vendor to verify)
that the code is not available as it is expected to be under common readings
of the CDDL.  Further, the discussion has expanded into what people believe the
CDDL requirements to be.  All of this discussion could be headed off with a
simple "we are on it" from one of the parties involved.

-Wade



Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread kristof
I don't know if this is already available in S10 10/08, but in OpenSolaris
builds > 71 you can set the zpool 'failmode' property.

see: 
http://opensolaris.org/os/community/arc/caselog/2007/567/

The available options are:

 The property can be set to one of three options: "wait", "continue",
or "panic".

The default behavior will be to "wait" for manual intervention before
allowing any further I/O attempts. Any I/O that was already queued would
remain in memory until the condition is resolved. This error condition can
be cleared by using the 'zpool clear' subcommand, which will attempt to resume
any queued I/Os.

The "continue" mode returns EIO to any new write request but attempts to
satisfy reads. Any write I/Os that were already in-flight at the time
of the failure will be queued and may be resumed using 'zpool clear'.

Finally, the "panic" mode provides the existing behavior that was explained
above.
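
For example, a minimal sketch (assuming a pool named 'tank', which is just a
placeholder name):

  zpool get failmode tank            # show the current setting
  zpool set failmode=continue tank   # fail new writes with EIO, keep serving reads
  zpool clear tank                   # after repair, retry queued I/O and clear errors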


Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Ross
As far as I can tell, it all comes down to whether ZFS detects the failure 
properly, and what commands you use while it's recovering.

Running "zpool status" is a complete no-no if your array is degraded in any 
way.  It is capable of locking up ZFS even when it would otherwise have 
recovered itself.  If you had zpool status hang, this probably happened to you.

It also appears that ZFS is at the mercy of your drivers when it comes to 
detecting and reacting to the failure.  From my experience this means that when 
a device does fail, ZFS may react instantly and keep your mirror online, it may 
take 3 minutes (waiting for iSCSI to time out), or it may take a long time (if 
FMA is involved).
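
If FMA does get involved, you can at least see what it saw after the fact
(just a side note, not specific to mirrors):

  fmadm faulty    # list resources FMA has diagnosed as faulty
  fmdump -eV      # dump the raw error telemetry leading up to the diagnosis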

I've seen ZFS mirrors protect data nicely, but I've also seen a lot of very odd 
failure modes.  I'd quite happily run ZFS in production, but you can be damn sure 
it'd be on Sun hardware, and I'd test as many failure modes as I could before it 
went live.


Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Wade . Stuart

[EMAIL PROTECTED] wrote on 10/07/2008 01:10:51 PM:

> I don't know if this is already available in S10 10/08, but in
> opensolaris build > 71 you can set the:
>
> zpool failmode property
>
> see:
> http://opensolaris.org/os/community/arc/caselog/2007/567/
>
> available options are:
>
>  The property can be set to one of three options: "wait", "continue",
> or "panic".
>
> The default behavior will be to "wait" for manual intervention before
> allowing any further I/O attempts. Any I/O that was already queued would
> remain in memory until the condition is resolved. This error condition can
> be cleared by using the 'zpool clear' subcommand, which will attempt to
> resume any queued I/Os.
>
> The "continue" mode returns EIO to any new write request but attempts to
> satisfy reads. Any write I/Os that were already in-flight at the time
> of the failure will be queued and maybe resumed using 'zpool clear'.
>
> Finally, the "panic" mode provides the existing behavior that was
> explained above.

Huh?  I was under the impression that this was for catastrophic write
issues (no paths to storage at all), not just one side of a mirror being
down.  I run mostly raidz2 and have not tested mirror breakage, but am I
wrong in assuming that, as with any other mirroring system (hardware or
software), when you lose one side of a mirror the expected result is that
the filesystem stays online and error-free while the disk(s) in question
are marked as down/failed/offline?

-Wade



Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Carson Gaspar
kristof wrote:
> I don't know if this is already available in S10 10/08, but in opensolaris 
> build > 71 you can set the:
>
> zpool failmode property
>
> see:
> http://opensolaris.org/os/community/arc/caselog/2007/567/
>
> available options are:
>
>   The property can be set to one of three options: "wait", "continue",
> or "panic".

I'm fairly certain that this isn't what the OP was concerned about.

The OP appeared to be concerned about ZFS's behaviour when one half of a 
mirror went away. As the pool is merely degraded, ZFS will continue to 
allow reads and writes... eventually...

Depending on _how_ the disk is failing, I/O may become glacial, or 
freeze entirely for several minutes before recovering, or hiccup briefly 
and then go on normally. ZFS is layered to the point where stacked 
timeouts _may_ become unreasonably large (see many previous threads). 
And a single "slow" device will drag the rest of the volume with it 
(e.g. a disk that demands 10 retries per write).

SVM suffers from some of the same problems, although not (in my 
experience) to the same degree. SVM tends to err on the side of "fail 
the disk quickly", whereas ZFS tries very very hard to make all I/O 
succeed, and relies on the fault management system or I/O stack to 
decide to fail things.

-- 
Carson


Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Eric Schrock
On Tue, Oct 07, 2008 at 11:42:57AM -0700, Ross wrote:
> 
> Running "zpool status" is a complete no no if your array is degraded
> in any way.  This is capable of locking up zfs even when it would
> otherwise have recovered itself.  If you had zpool status hang, this
> probably happened to you.

FYI, this is bug 6667208, fixed in build 100 of Nevada.

- Eric

--
Eric Schrock, Fishworks                    http://blogs.sun.com/eschrock


Re: [zfs-discuss] [Fwd: Re: ZSF Solaris]

2008-10-07 Thread Jens Elkner
On Tue, Oct 07, 2008 at 11:35:47AM +0530, Pramod Batni wrote:
  
>    The reason why the (implicit) truncation could be taking long might be due to
>    6723423 UFS slow following large file deletion with fix for 6513858 installed
>
>    To overcome this problem for S10, the offending patch 127866-03 can be removed.
>
>    It is not yet fixed in snv. A fix is being developed, not sure which
>    build it would be available in.

OK - thanks for your answer. Since the fixes in patch revisions -03 to -05
seem to be important, I'll try to initiate an escalation of the case - does
that help to get the fix in a little bit earlier?

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


Re: [zfs-discuss] zpool imports are slow when importing multiple storage pools

2008-10-07 Thread Jens Elkner
On Mon, Oct 06, 2008 at 05:08:13PM -0700, Richard Elling wrote:
> Scott Williamson wrote:
> > Speaking of this, is there a list anywhere that details what we can 
> > expect to see for (zfs) updates in S10U6?
> 
> The official release name is "Solaris 10 10/08"

Ooops - no beta this time?

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Ross Smith

Oh cool, that's great news.  Thanks Eric.



> Date: Tue, 7 Oct 2008 11:50:08 -0700
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS Mirrors braindead?
> 
> On Tue, Oct 07, 2008 at 11:42:57AM -0700, Ross wrote:
>> 
>> Running "zpool status" is a complete no no if your array is degraded
>> in any way.  This is capable of locking up zfs even when it would
>> otherwise have recovered itself.  If you had zpool status hang, this
>> probably happened to you.
> 
> FYI, this is bug 6667208 fixed in build 100 of nevada.
> 
> - Eric
> 
> --
> Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock
