On 25-May-07, at 7:28 PM, John Plocher wrote:
...
I found that the V440's original 72Gb drives had been "upgraded"
to Dell 148Gb Fujitsu drives, and the Sun versions of those drives
(same model number...) had different firmware
You can't get hold of another one of the same drive?
--Toby
May 25 23:32:59 summer unix: [ID 836849 kern.notice]
May 25 23:32:59 summer panic[cpu1]/thread=1bf2e740:
May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
Page fault) rp=ff00232c3a80 addr=490 occurred in module "unix" due to a
NULL pointer dereference
On 5/25/07, John Plocher <[EMAIL PROTECTED]> wrote:
One of the raidz pool drives failed. When I went to replace it,
I found that the V440's original 72Gb drives had been "upgraded"
to Dell 148Gb Fujitsu drives, and the Sun versions of those drives
(same model number...) had different firmware, a
On Fri, 2007-05-25 at 15:46 -0700, Eric Schrock wrote:
> On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
> >
> > In fact the console-login depends on filesystem/minimal which to me
> > means minimal file systems not all file systems and there is no software
> > dependent on console-lo
On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
>
> In fact the console-login depends on filesystem/minimal which to me
> means minimal file systems not all file systems and there is no software
> dependent on console-login - where's the disconnect?
>
You're correct - I thought cons
Given that there are a bunch of filesystems in the pool, each
with some set of properties ..., what is the easiest way to
move the data and metadata back and forth without losing
anything, and without having to manually recreate the
metainfo/properties?
AFAIK, your only choices are:
A. Write/fi
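One scriptable approach to the property question above, sketched under assumptions: the pool and dataset names are placeholders, and only properties whose source is "local" (i.e. set by the admin rather than inherited or default) are re-applied on the copy.

```shell
# Sketch: copy one dataset to another pool, then re-apply its
# locally-set properties so the metadata survives the move.
# "tank/home" and "backup/home" are hypothetical names.
SRC=tank/home
DST=backup/home
zfs snapshot "$SRC@move"
zfs send "$SRC@move" | zfs receive "$DST"
# Properties with source "local" were set by hand; copy them across.
# (Values containing whitespace would need more careful quoting.)
zfs get -H -s local -o property,value all "$SRC" |
while read -r prop value; do
    zfs set "$prop=$value" "$DST"
done
```

Later builds also gained `zfs send -R`, which sends a whole snapshot tree with properties in one step; if your build has it, that is simpler.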
I didn't mean to imply that it wasn't technically possible, only that
there is no "one size fits all" solution for OpenSolaris as a whole.
Even getting this to work in an easily tunable form is quite tricky,
since you must dynamically determine dependencies in the process
(filesystem/minimal vs. f
On Fri, 2007-05-25 at 15:19 -0700, Eric Schrock wrote:
> This has been discussed many times in smf-discuss, for all types of
> login. Basically, there is no way to say "console login for root
> only". As long as any user can log in, we need to have all the
> filesystems mounted because we don't k
Why not simply have an SMF sequence that does
early in boot, after / and /usr are mounted:
  create /etc/nologin (contents="coming up, not ready yet")
  enable login
later in boot, when user filesystems are all mounted:
  delete /etc/nologin
Wouldn't this give the
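The proposed sequence can be sketched as plain shell functions; this is an illustrative sketch, not an actual SMF method script, and the path is parameterized so it can be exercised outside of boot.

```shell
# Gate logins until user filesystems are mounted, via /etc/nologin.
# NOLOGIN defaults to the real path but can be overridden for testing.
NOLOGIN=${NOLOGIN:-/etc/nologin}

pre_login_gate() {
    # Early in boot, after / and /usr are mounted: block non-root
    # logins with an explanatory message.
    echo "coming up, not ready yet" > "$NOLOGIN"
}

post_mount_cleanup() {
    # Later in boot, once user filesystems are all mounted:
    # remove the gate so normal logins proceed.
    rm -f "$NOLOGIN"
}
```

In a real SMF setup, the two functions would be the start methods of two services ordered by dependencies around filesystem/local.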
Thru a sequence of good intentions, I find myself with a raidz'd
pool that has a failed drive that I can't replace.
We had a generous department donate a fully configured V440 for
use as our departmental server. Of course, I installed SX/b56
on it, created a pool with 3x 148Gb drives and made a
On Fri, May 25, 2007 at 03:01:20PM -0700, Mike Dotson wrote:
> On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
> > Mike Dotson wrote:
> > > On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
> >
> > > Would help in many cases where an admin needs to work on a system but
> > > doesn't need, sa
>On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
>> Mike Dotson wrote:
>> > On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>>
>> > Would help in many cases where an admin needs to work on a system but
>> > doesn't need, say 20k users home directories mounted, to do this work.
>> >
>>
On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
> Mike Dotson wrote:
> > On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>
> > Would help in many cases where an admin needs to work on a system but
> > doesn't need, say 20k users home directories mounted, to do this work.
> >
> So single
Mike Dotson wrote:
On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
Bill Sommerfeld wrote:
IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
but (this is something of a digression based on discussion kicked up by
PSARC 2007/297) it's become clear to me that ZFS
On 5/24/07, Tom Buskey <[EMAIL PROTECTED]> wrote:
> Linux and Windows
> as well as the BSDs) are all relative newcomers to
> the 64-bit arena.
The 2nd non-x86 port of Linux was to the Alpha in 1999 (98?) by Linus no less.
In 1994 to be precise. In 1999 Linux 2.2 got released, which supported
On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
> Bill Sommerfeld wrote:
> > IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
> > but (this is something of a digression based on discussion kicked up by
> > PSARC 2007/297) it's become clear to me that ZFS filesystems *shou
here's a blog
entry that I just posted on the subject:
http://blogs.sun.com/lalt/date/20070525
Weigh in if you care.
IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
but (this is something of a digression based on discussion kicked up by
PSARC 2007/297) it's bec
Prior to rebooting my system (S10U2) yesterday, I had half
a dozen ZFS shares active...
Today, now that I look at this, I find that only one of them is
being exported through NFS.
# zfs list -o name,sharenfs
NAME             SHARENFS
biscuit          off
biscuit/crashes  off
biscu
Build 64a has bug 6553537 (zfs root fails to boot from a
snv_63+zfsboot-pfinstall netinstall image), for which I
don't have a ready workaround. So I recommend waiting
for build 65 (which should be out soon, I think).
Lori
Al Hopper wrote:
Hi Lori,
Are there any changes to build 64a that will
> entry that I just posted on the subject:
>
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.
IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
but (this is something of a digression based on discussion kicked up by
PSARC 2007/
mounts.
> >Instead of writing up the issues again, here's a blog
> >entry that I just posted on the subject:
> >
> >http://blogs.sun.com/lalt/date/20070525
> >
> >Weigh in if you care.
>
> ZFS is a paradigm shift and Nevada has not been released. There
Hi Lori,
Are there any changes to build 64a that will affect ZFS bootability?
Will the conversion script for build 62 still do its magic?
Thanks,
Al Hopper Logical Approach Inc, Plano, TX. [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris Govern
the subject:
http://blogs.sun.com/lalt/date/20070525
Weigh in if you care.
ZFS is a paradigm shift and Nevada has not been released. Therefore I
vote for implementing it the "ZFS way" - going forward. Place the burden
on the "other" developers to fix their "bug
the subject:
http://blogs.sun.com/lalt/date/20070525
Weigh in if you care.
Interesting. Is there an ARC case that is related to some of these issues?
The ARC case for using zfs as a root file system is PSARC/2006/370,
but there isn't much there yet. I'm preparing the documents for
the cas
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.
How about having the vfstab file virtualized, like a special node? Upon
reading it, the system would take the actual cloaked vfstab and add the
root and other ZFS entries to it on the fly. The same for writing to it,
>
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.
Interesting. Is there an ARC case that is related to some of these issues?
---Bob
Albert Chin wrote:
I don't think you want to if=/dev/zero on ZFS. There's probably some
optimization going on. Better to use /dev/urandom or concat n-many
files comprised of random bits.
Unless you have turned on compression, that is not the case. By default
there is no optimization for all z
On May 25, 2007, at 11:22 AM, Roch Bourbonnais wrote:
On 22 May 07, at 01:11, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync
Malachi de Ælfweald wrote:
No, I did mean 'snapshot -r' but I thought someone on the list said
that the '-r' wouldn't work until b63... hmmm...
'snapshot -r' is available before b62, however, '-r' may run into a
stack overflow (bug 6533813) which is fixed in b63.
Lin
On Fri, May 25, 2007 at 09:54:04AM -0700, Grant Kelly wrote:
> > It would also be worthwhile doing something like the following to
> > determine the max throughput the H/W RAID is giving you:
> > # time dd of= if=/dev/zero bs=1048576 count=1000
> > For a 2Gbps 6140 with 300GB/10K drives, we get ~46M
> It would also be worthwhile doing something like the
> following to
> determine the max throughput the H/W RAID is giving
> you:
> # time dd of= if=/dev/zero bs=1048576 count=1000
> For a 2Gbps 6140 with 300GB/10K drives, we get ~46MB/s
> on a
> single-drive RAID-0 array, ~83MB/s on a 4-disk RA
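The suggested measurement can be written out as a runnable sketch; the target path is a placeholder (the quoted command left `of=` blank), so point it at the device or filesystem under test. The thread used `count=1000` (~1 GB); this trims it to 100 MB.

```shell
# Rough sequential-write throughput check with dd, 1 MiB blocks.
# TARGET is a placeholder -- direct it at the array under test.
TARGET=${TARGET:-/tmp/ddtest.$$}
time dd if=/dev/zero of="$TARGET" bs=1048576 count=100
rm -f "$TARGET"
```

Note that writing zeros can overstate throughput on compressing or thin-provisioned storage; /dev/urandom avoids that at the cost of CPU.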
On 22 May 07, at 16:23, Dick Davies wrote:
Take off every ZIL!
http://number9.hellooperator.net/articles/2007/02/12/zil-communication
It can cause client corruption, but also database corruption, and
problems for just about anything that carefully manages its data.
Yes the zpool will survive, but it may be t
On 22 May 07, at 03:18, Frank Cusack wrote:
On May 21, 2007 6:30:42 PM -0500 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500,
On Fri, May 25, 2007 at 12:01:45PM -0400, Andy Lubel wrote:
> I'm using:
>
> set zfs:zil_disable 1
>
> On my SE6130 with ZFS, accessed over NFS, write performance almost
> doubled. Since you have BBC, why not just set that?
I don't think it's enough to have BBC to justify zil_disable=1.
B
On 22 May 07, at 01:21, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS
>> Won't disabling the ZIL minimize the chance of a consistent ZFS
>> filesystem if - for some reason - the server did an unplanned reboot?
>
> The ZIL in ZFS is only used to speed up various workloads; it has
> nothing to do with file system consistency. ZFS is always consistent
> on disk no matter
On 22 May 07, at 01:11, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync
semantics
conspire against single-threaded performanc
http://blogs.sun.com/lalt/date/20070525
Weigh in if you care.
Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thanks to everyone for their help! Yes, dtrace did help: I found that in my
layered driver, the prop_op entry point had an error in setting the [Ss]ize
dynamic property, and apparently that's what ZFS looks for, not just Nblocks!
What took me so long in getting to this error was that the drive
I'm using:
set zfs:zil_disable 1
On my SE6130 with ZFS, accessed over NFS, write performance almost
doubled. Since you have BBC, why not just set that?
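For reference, my understanding is that on this vintage of Solaris the tunable was set in /etc/system rather than via a zfs subcommand; treat the fragment below as an assumption to verify against your release.

```shell
* /etc/system fragment -- disables the ZIL globally from the next boot.
* It speeds up synchronous workloads (e.g. NFS writes), but clients may
* see data loss after a server crash, even though the pool itself
* remains consistent on disk.
set zfs:zil_disable = 1
```

Being a global switch, it affects every pool on the host, not just the one serving NFS.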
-Andy
On 5/24/07 4:16 PM, "Albert Chin"
<[EMAIL PROTECTED]> wrote:
> On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
>>
On Fri, May 25, 2007 at 12:14:45AM -0400, Torrey McMahon wrote:
> Albert Chin wrote:
> >On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
> >
> >
> >>I'm getting really poor write performance with ZFS on a RAID5 volume
> >>(5 disks) from a storagetek 6140 array. I've searched the web a
Won't disabling the ZIL minimize the chance of a consistent ZFS
filesystem if - for some reason - the server did an unplanned reboot?
The ZIL in ZFS is only used to speed up various workloads; it has
nothing to do with file system consistency. ZFS is always consistent on
disk no matter if you use
No, I did mean 'snapshot -r' but I thought someone on the list said that the
'-r' wouldn't work until b63... hmmm...
Well, realistically, all of us new to this should probably know how to patch
our system before we put any useful data on it anyway, right? :)
Thanks,
Mal
On 5/25/07, Constantin G
Hi Shweta,
First, look for all kernel functions that return that errno (25,
I think) during your test:
dtrace -n 'fbt:::return/arg1 == 25/{@[probefunc] = count()}'
More verbose but also useful:
dtrace -n 'fbt:::return/arg1 == 25/{@[stack(20)] = count()}'
It's a cat
Hi Malachi,
Malachi de Ælfweald wrote:
> I'm actually wondering the same thing because I have b62 w/ the ZFS
> bits; but need the snapshot's "-r" functionality.
you're lucky, it's already there. From my b62 machine's "man zfs":
zfs snapshot [-r] filesystem@snapname|volume@snapname
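A hypothetical usage example of the recursive form (pool and snapshot names invented for illustration):

```shell
# Atomically snapshot a filesystem and every descendant dataset,
# then list the snapshots that were created.
zfs snapshot -r tank@before-upgrade
zfs list -t snapshot
```

The -r form takes all the snapshots in one transaction, so the tree is captured at a single point in time.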
Constantin Gonzalez wrote:
Hi,
Our upgrade story isn't great right now. In the meantime,
you might check out Tim Haley's blog entry on using
bfu with zfs root.
thanks.
But doesn't live upgrade just start the installer from the new OS
DVD with the right options? Can't I just do that
Hi,
> Our upgrade story isn't great right now. In the meantime,
> you might check out Tim Haley's blog entry on using
> bfu with zfs root.
thanks.
But doesn't live upgrade just start the installer from the new OS
DVD with the right options? Can't I just do that too?
Cheers,
Constantin
>
>
Our upgrade story isn't great right now. In the meantime,
you might check out Tim Haley's blog entry on using
bfu with zfs root.
http://blogs.sun.com/timh/entry/friday_fun_with_bfu_and
lori
Constantin Gonzalez wrote:
Hi,
I'm a big fan of live upgrade. I'm also a big fan of ZFS boot. The lat
I'm actually wondering the same thing because I have b62 w/ the ZFS bits;
but need the snapshot's "-r" functionality.
Malachi
On 5/25/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
Hi,
I'm a big fan of live upgrade. I'm also a big fan of ZFS boot. The latter
is more important for me. And
On 25-May-07, at 10:00 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
What if your HW-RAID-controller dies? in say 2 ye
On Fri, May 25, 2007 at 09:41:05AM +0200, Claus Guttesen wrote:
> >> I have just (re)installed FreeBSD amd64 current with gcc 4.2 with src
> >> from May. 21'st on a dual Dell PE 2850. Does the post-gcc-4-2 current
> >> include all your zfs-optimizations?
> >>
> >> I have commented out INVARIANTS,
Toby Thain wrote:
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks as a configu
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks as a configured RAID? Do you kn
performance.
> > Or if you do want to use bfu becaus
Hello,
I got an error while creating a new zone on ZFS. I'm using Solaris 10
11/06 with patch 118855-36. I have several other machines with identical
hardware, but only this one shows this behaviour.
[EMAIL PROTECTED]:/] # fmdump -V
TIME UUID SUNW-MSG-ID
Hi,
I'm a big fan of live upgrade. I'm also a big fan of ZFS boot. The latter is
more important for me. And yes, I'm looking forward to both being integrated
with each other.
Meanwhile, what is the best way to upgrade a post-b61 system that is booted
from ZFS?
I'm thinking:
1. Boot from ZFS
2.
[EMAIL PROTECTED] wrote:
> >> IRIX was much earlier than Solaris; Solaris was pretty late in the 64 bit
> >> game with Solaris 7.
> >
> >And Alpha did not have a real 64 bit port as they did implement ILP64.
> >With ILP64 your application does not really notice that it runs in 64 bits
> >if you on
>Depend on the guarantees. Some RAID systems have built in block
>checksumming.
But we all know that block checksums stored with the blocks do
not catch a number of common errors.
(Ghost writes, misdirected writes, misdirected reads)
Casper
> I have just (re)installed FreeBSD amd64 current with gcc 4.2 with src
> from May. 21'st on a dual Dell PE 2850. Does the post-gcc-4-2 current
> include all your zfs-optimizations?
>
> I have commented out INVARIANTS, INVARIANTS_SUPPORT, WITNESS and
> WITNESS_SKIPSPIN in my kernel and recompiled