Dick Davies wrote:
> On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
>
>
>> Client A
>> - import pool make couple-o-changes
>>
>> Client B
>> - import pool -f (heh)
>>
>
>
>> Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
>> Oct 4 15:03:12 fozzie genunix: [
Wouldn't this be the known feature where a write error to zfs forces a panic?
Vic
On 10/4/07, Ben Rockwood <[EMAIL PROTECTED]> wrote:
> Dick Davies wrote:
> > On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> >
> >
> >> Client A
> >> - import pool make couple-o-changes
> >>
>> Client B
I think it's a little more sinister than that...
I'm only just trying to import the pool. Not even yet doing any I/O to it...
Perhaps it's the same cause, I don't know...
But I'm certainly not convinced that I'd be happy with a 25K, for
example, panicking just because I tried to import a dud pool...
Hi
I have a Netra T1 with 2 int disks. I want to install Sol 10 8/07 and build 2
zones (one as an ftp server and one as an scp server) and would like the system
mirrored.
My thoughts are to use SVM to mirror the / partitions, then build a mirrored
zfs pool using slice 5 on both disks (I know
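For illustration only, not from the original post: a minimal sketch of the
layout described above, assuming the two internal disks are c0t0d0 and c0t1d0
and that s7 is free for the SVM state databases (all device names are
placeholders).

    # SVM state databases and a mirrored root (s0 on both disks)
    metadb -a -f c0t0d0s7 c0t1d0s7
    metainit d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10
    metaroot d0
    # after the post-metaroot reboot, attach the second submirror
    metattach d0 d20

    # mirrored ZFS pool on slice 5 of both disks, to hold the zone roots
    zpool create tank mirror c0t0d0s5 c0t1d0s5
    zfs create tank/zones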
> Perhaps it's the same cause, I don't know...
>
> But I'm certainly not convinced that I'd be happy with a 25K, for
> example, panicking just because I tried to import a dud pool...
>
> I'm ok(ish) with the panic on a failed write to a non-redundant storage.
> I expect it by now...
>
I agree, forc
> Where does the win come from with "direct I/O"? Is it 1), 2), or some
> combination? If its a combination, what's the percentage of each
> towards the win?
>
That will vary based on workload (I know, you already knew that ... :^).
Decomposing the performance win between what is gained as
> This bug was rendered moot via 6528732 in build snv_68 (and s10_u5).
> We now store physical device paths with the vnodes, so even though
> the SATA framework doesn't correctly support open by devid in early
> boot, we
But if I read it right, there is still a problem in the SATA framework (fai
Jim Mauro writes:
>
> > Where does the win come from with "direct I/O"? Is it 1), 2), or some
> > combination? If its a combination, what's the percentage of each
> > towards the win?
> >
> That will vary based on workload (I know, you already knew that ... :^).
> Decomposing the performance win between what is gained as
>
> Client A
> - import pool make couple-o-changes
>
> Client B
> - import pool -f (heh)
>
> Client A + B - With both mounting the same pool, touched a couple of
> files, and removed a couple of files from each client
>
> Client A + B - zpool export
>
> Client A - Attempted import and dropped
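For reference, a minimal sketch of the sequence quoted above, not taken from
the original mails; the pool name 'tank' and the file names are placeholders.

    # Client A
    clientA# zpool import tank
    clientA# touch /tank/a-file ; rm /tank/some-file

    # Client B forces the import even though Client A still has the pool
    clientB# zpool import -f tank
    clientB# touch /tank/b-file ; rm /tank/other-file

    # both clients export
    clientA# zpool export tank
    clientB# zpool export tank

    # Client A tries the import again -- the step discussed in this thread
    clientA# zpool import tank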
I'm pleased to announce that the ZFS Crypto project now has Alpha
release binaries that you can download and try. Currently we only have
x86/x64 binaries available, SPARC will be available shortly.
Information on the Alpha release of ZFS Crypto and links for downloading
the binaries are here:
On Thu, Oct 04, 2007 at 08:36:10AM -0600, eric kustarz wrote:
> > Client A
> > - import pool make couple-o-changes
> >
> > Client B
> > - import pool -f (heh)
> >
> > Client A + B - With both mounting the same pool, touched a couple of
> > files, and removed a couple of files from each client
Lori Alt told me that mountroot was a temporary hack until grub
could boot zfs natively.
Since build 62, mountroot support was dropped and I am not convinced
that this is a mistake.
Let's compare the two:
Mountroot:
Pros:
* can have root partition on raid-z: YES
* can have root partit
eric kustarz writes:
> >
> > Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
> > surprised that this is being met with skepticism considering that
> > Oracle highly recommends direct IO be used, and, IIRC, Oracle
> performance was the main motivation for adding DIO to
On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote:
> > It does, which leads to the core problem. Why do we have to store the
> > exact same data twice in memory (i.e., once in the ARC, and once in
> > the shared memory segment that Oracle uses)?
>
> We do not retain 2 copies of the sa
On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
> ...memory utilisation... OK so we should implement the 'lost cause' rfe.
>
> In all cases, ZFS must not steal pages from other memory consumers :
>
> 6488341 ZFS should avoiding growing the ARC into trouble
>
> So the DB memory pages should not be _contended_ for.
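As an aside (not something proposed in this thread), the usual stopgap while
those RFEs are open is to cap the ARC by hand. A minimal sketch, assuming a
Solaris 10 / Nevada box where a 4 GB cap is acceptable (the value is only an
example), added to /etc/system:

    * Cap the ZFS ARC so it cannot grow into memory the database needs.
    * Value is in bytes; 0x100000000 = 4 GB. Requires a reboot to take effect.
    set zfs:zfs_arc_max = 0x100000000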
On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
> > This bug was rendered moot via 6528732 in build snv_68 (and s10_u5).
> > We now store physical device paths with the vnodes, so even though
> > the SATA framework doesn't correctly support open by devid in early
> > boot, we
>
Nicolas Williams writes:
> On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
> > ...memory utilisation... OK so we should implement the 'lost cause' rfe.
> >
> > In all cases, ZFS must not steal pages from other memory consumers :
> >
> > 6488341 ZFS should avoiding growing the ARC into trouble
On Thu, Oct 04, 2007 at 06:59:56PM +0200, Roch - PAE wrote:
> Nicolas Williams writes:
> > On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
> > > So the DB memory pages should not be _contended_ for.
> >
> > What if your executable text, and pretty much everything lives on ZFS?
>
I'd like to second a couple of comments made recently:
* If they don't regularly do so, I too encourage the ZFS, Solaris
performance, and Sun Oracle support teams to sit down and talk about the
utility of Direct I/O for databases.
* I too suspect that absent Direct I/O (or some ringing en
It fails on my machine because it requires a patch that's deprecated.
Remember that you have to maintain an entirely separate slice with yet
another boot environment. This causes huge amounts of complexity in
terms of live upgrade, multiple BE management, etc. The old mountroot
solution was useful for mounting ZFS root, but completely unmaintainable
from an install
Nicolas Williams writes:
> On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote:
> > > It does, which leads to the core problem. Why do we have to store the
> > > exact same data twice in memory (i.e., once in the ARC, and once in
> > > the shared memory segment that Oracle uses)?
>
Manually installing the obsolete patch 122660-10 has worked fine for me.
Until Sun fixes the patch dependencies, I think that is the easiest way.
-Brian
Bruce Shaw wrote:
> It fails on my machine because it requires a patch that's deprecated.
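For anyone following along, a minimal sketch of the manual install Brian
describes, not from the original mail; the download location under /var/tmp
is just an assumption.

    # unpack the downloaded patch and add it explicitly
    cd /var/tmp
    unzip 122660-10.zip
    patchadd /var/tmp/122660-10

    # confirm it is applied
    showrev -p | grep 122660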
Update to this. Before destroying the original pool the first time, offline the
disk you plan on re-using in the new pool. Otherwise when you destroy the
original pool for the second time it causes issues with the new pool. In fact,
if you attempt to destroy the new pool immediately after destro
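A minimal sketch of the sequence being described, not from the original post;
the pool names and the shared disk c1t2d0 are placeholders, and the old pool
is assumed to be redundant so the disk can be offlined.

    # take the disk you intend to re-use out of the original (redundant) pool
    zpool offline oldpool c1t2d0

    # destroy the original pool, then build the new pool on that disk
    zpool destroy oldpool
    zpool create -f newpool c1t2d0    # -f: the disk still carries the old label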
Yeah, the only thing wrong with that patch is that it eats
/etc/sma/snmp/snmpd.conf
All is not lost; your original is copied to
/etc/sma/snmp/snmpd.conf.save in the process.
Rob++
Brian H. Nelson wrote:
> Manually installing the obsolete patch 122660-10 has worked fine for me.
> Until sun fix
It was 120272-12 that caused the snmpd.conf problem and was withdrawn.
120272-13 has replaced it and has that bug fixed.
122660-10 does not have any issues that I am aware of. It is only
obsolete, not withdrawn. Additionally, it appears that the circular
patch dependency is by design if you read
On Mon, Jul 16, 2007 at 09:36:06PM -0700, Stuart Anderson wrote:
> Running Solaris 10 Update 3 on an X4500 I have found that it is possible
> to reproducibly block all writes to a ZFS pool by running "chgrp -R"
> on any large filesystem in that pool. As can be seen below in the zpool
> iostat outp
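For context, a minimal sketch of the reproduction being described, not from
the original report; the pool name 'tank', the group, and the path are
placeholders.

    # kick off a recursive group change over a large ZFS filesystem
    chgrp -R dba /tank/bigfs &

    # watch pool throughput while it runs -- the report is that other
    # writes to the pool stall until the chgrp completes
    zpool iostat tank 5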
Erik -
Thanks for that, but I know the pool is corrupted - that was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
>>
>> Client A
>> - import pool make couple-o-changes
On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
> Erik -
>
> Thanks for that, but I know the pool is corrupted - that was kind of the
> point of the exercise.
>
> The bug (at least to me) is ZFS panicking Solaris just trying to import
> the dud pool.
>
> But, maybe I'm missing
Hi,
Using bootroot I can do a separate /usr filesystem since b64. I can also
do snapshot, clone and compression.
Rgds,
Andre W.
Kugutsumen wrote:
> Lori Alt told me that mountroot was a temporary hack until grub
> could boot zfs natively.
> Since build 62, mountroot support was dropped and I a
Awesome.
Thanks, Eric. :)
This type of feature / fix is quite important to a number of the guys in
our local OSUG. In particular, they are adamant that they cannot use
ZFS in production until it stops panicking the whole box for isolated
filesystem / zpool failures.
This will be a big step
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> Henk,
>
> By upgrading do you mean, rebooting and installing Open Solaris from DVD or
> Network?
>
> Like, no Patch Manager, install some quick patches and updates and a quick
> reboot, right?
You can live upgrade and then do a quick reb
> 5) DMA straight from user buffer to disk avoiding a copy.
This is what the "direct" in "direct i/o" has historically meant. :-)
> line has been that 5) won't help latency much and
> latency is where I think the game is currently played. Now the
> disconnect might be because people might feel th
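For readers who haven't used it, a minimal sketch of what "direct I/O" has
historically meant on Solaris, i.e. UFS mounted with forcedirectio; the
device and mount point are placeholders, not from the original mails.

    # mount a UFS filesystem with direct I/O forced for all files, so reads
    # and writes bypass the page cache and DMA straight to/from the
    # application's buffer
    mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /oradata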
...and eventually in a read-write capacity:
http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-
developer-preview-1-1-for-leopard/
Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
OS X to Developers this week. The preview updates a previous build
released on Ju
Dale Ghent wrote:
> ...and eventually in a read-write capacity:
>
> http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-
> developer-preview-1-1-for-leopard/
>
> Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
> OS X to Developers this week. The preview updates a p
I'm posting here as this seems to be a ZFS issue. We also have an open ticket
with Sun support, and I've heard another large Sun customer is also reporting
this as an issue.
Basic problem: create a ZFS file system and set shareiscsi to on. On a VMware
ESX server, discover that iSCSI target. I
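For reference, a minimal sketch of the setup being described, not from the
original report; the pool/volume names and size are placeholders, and note
that shareiscsi exports ZFS volumes (zvols) rather than plain file systems.

    # create a zvol and export it as an iSCSI target for the ESX host
    zfs create -V 100g tank/esxlun
    zfs set shareiscsi=on tank/esxlun

    # list the target name/IQN so it can be discovered from the ESX side
    iscsitadm list target -v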
I've been thinking about this for a while, but Anton's analysis makes me
think about it even more:
We all love ZFS, right? It's futuristic in a bold new way, with many
virtues; I won't preach to the choir. But to make it all glue together
has some necessary CPU/memory-intensive operations
Please do share how you managed to have a separate ZFS /usr since
b64; there are dependencies to /usr and they are not documented.
-kv doesn't help either. I tried adding /usr/lib/libdisk* to a /usr/lib
dir on the root partition and failed.
Jurgen also pointed out that there are two related bugs alre