Out of curiosity, does anyone know at what version you get a warning,
and at what version installgrub is run automatically after upgrading
a root pool/filesystem?
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
After upgrading to zpool version 29/zfs version 5 on a S10 test system via the
kernel patch 144501-19, it will now boot only as far as the GRUB menu.
What is a good Solaris rescue image that I can boot that will allow me to
import this rpool to look at it (given the newer version)?
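A minimal recovery sketch for that situation, assuming the rescue media's ZFS
code is at least zpool version 29 and using an illustrative disk name
(c0t0d0s0):

  # import the root pool under an alternate root just to inspect it
  zpool import -f -R /a rpool
  # reinstall the GRUB boot blocks using the rescue environment's own stage
  # files (they need to be new enough to read zpool v29)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
  zpool export rpool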
Thanks
On Jan 30, 2011, at 6:03 PM, Richard Elling wrote:
> On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
>> On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
>>
>>> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>>>
>>>> Is it possible to par
How do you verify that a zfs send binary object is valid?
I tried running a truncated file through zstreamdump and it completed
with no error messages and an exit() status of 0. However, I noticed it
was missing a final print statement with a checksum value,
END checksum = ...
Is there any normal
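Since zstreamdump apparently exits 0 even for a truncated stream, one hedged
workaround is to check for the final summary line it should print for a
complete stream (file name illustrative; Solaris /usr/bin/grep lacks -q, so
redirect instead):

  if zstreamdump < backup.zfs | grep 'END checksum' > /dev/null; then
      echo 'stream looks complete'
  else
      echo 'no END checksum line - stream may be truncated'
  fi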
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>
>> Is it possible to partition the global setting for the maximum ARC size
>> with finer grained controls? Ideally, I would like to do this on a per
>> zvol basis
On Jan 30, 2011, at 1:49 PM, Richard Elling wrote:
> On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
>>
>> On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
>>
>>> On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
>>>
>>>> Is ther
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis, but a setting per zpool would be interesting as well.
The use case is to prioritize which zvol devices should be fully cached
in DRAM on a serve
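As far as I know there is no per-zvol ARC size limit, but if the release
supports the per-dataset primarycache property, that gives coarse control
over which datasets may use the ARC at all. A sketch with hypothetical names:

  # keep a low-priority zvol from competing for ARC space
  # (its data is not cached, its metadata still is)
  zfs set primarycache=metadata tank/scratchvol
  # leave the zvol that should stay fully cached at the default (all)
  zfs get primarycache tank/dbvol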
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
> On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
>
>> Is there a simple way to query zfs send binary objects for basic information
>> such as:
>>
>> 1) What snapshot they represent?
>> 2) When they w
Is there a simple way to query zfs send binary objects for basic information
such as:
1) What snapshot they represent?
2) When they were created?
3) Whether they are the result of an incremental send?
4) What the baseline snapshot was, if applicable?
5) What ZFS version number they were made
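A hedged sketch of one way to pull that kind of information out of a stream,
assuming the BEGIN record printed by zstreamdump carries the snapshot name
(toname), the origin GUID for incrementals (fromguid), the creation time, and
the stream version (file name illustrative):

  zstreamdump < backup.zfs | head -40 | egrep 'toname|fromguid|creation_time|version'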
entire drive, but perhaps those do not apply as
significantly to SSD devices?
Thanks
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
n we were using
to create our own volumes.
If I remember the sign correctly, the newer firmware creates larger logical
volumes, and you really want to upgrade the firmware if you are going to
be running multiple X25-E drives from the same controller.
I hope that helps.
--
Stuart Anderson ander.
Edward Ned Harvey nedharvey.com> writes:
>
> Allow me to clarify a little further, why I care about this so much. I have
> a solaris file server, with all the company jewels on it. I had a pair of
> intel X.25 SSD mirrored log devices. One of them failed. The replacement
> device came with a
On Oct 2, 2009, at 11:54 AM, Robert Milkowski wrote:
> Stuart Anderson wrote:
>>
>> On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
>>
>>> Stuart Anderson wrote:
>>>> I am wondering if the following idea makes any sense as a way to get
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
> On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
>>
>> As a specific example of 2 devices with dramatically different performance
>> for sub-4k transfers has anyone done any ZFS benchmarks between the X25E and
>
> On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
> >The question behind the question is, given the really bad things that
> >can happen performance-wise with writes that are not 4k aligned when
> >using flash devices, is there any way to insure that any and all
> >writes from ZFS are 4k alig
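For what it is worth, later OpenZFS-based releases (not, as far as I know,
the S10 builds discussed here) let you force the pool's minimum write
alignment at creation time; an illustrative sketch:

  # ashift=12 means 2^12 = 4096-byte alignment for everything the pool writes
  zpool create -o ashift=12 flashpool c1t0d0 c1t1d0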
ta data
Metadata is usually only a small percentage.
Sparse-ness is not a factor here. Sparse just means we ignore the
reservation so you can create a zvol bigger than what we'd normally
allow.
Cindy
On 10/17/09 13:47, Stuart Anderson wrote:
What does it mean for the reported value of a zvol vo
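To illustrate the point about sparse volumes only skipping the reservation,
a sketch with hypothetical names and sizes:

  # regular zvol: a reservation (refreservation on recent releases) equal to
  # volsize is set at create time
  zfs create -V 100g tank/vol-reserved
  # sparse zvol (-s): same volsize but no reservation, so it can be created
  # even if the pool could not hold 100g of fully written data
  zfs create -s -V 100g tank/vol-sparse
  zfs get volsize,refreservation tank/vol-reserved tank/vol-sparse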
) * compressratio (11.20) = 166907917926
which is 3.6% larger than volsize.
Is this a bug or a feature for sparse volumes? If a feature, how
much larger than volsize/compressratio can the actual used
storage space grow? e.g., fixed amount overhead and/or
fixed percentage?
Thanks.
--
Stuart Anderson
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to
get ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible
data on persistent storage devices, what
but
unavailable?
Note, this Gedanken experiment is for highly compressible (~9x)
metadata for a non-ZFS filesystem.
Thanks.
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
, 2009, at 7:31 PM, Stuart Anderson wrote:
This is S10U7 fully patched and not OpenSolaris, but I would
appreciate any
advice on the following transient "Permanent error" message generated
while running a zpool scrub.
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.c
I just ran zpool scrub on an active pool on an x4170 running S10U7 with the
latest patches and iostat immediately dropped to 0 for all the pool devices and
all processes associated with that device were hard locked, e.g., kill -9 on a
zpool status process was ineffective. However, other zpool
> > > Question :
> > >
> > > Is there a way to change the volume blocksize say
> > > via 'zfs snapshot send/receive'?
> > >
> > > As I see things, this isn't possible as the target
> > > volume (including property values) gets overwritten
> > > by 'zfs receive'.
> > >
> >
> > By default, proper
> > Question :
> >
> > Is there a way to change the volume blocksize say
> > via 'zfs snapshot send/receive'?
> >
> > As I see things, this isn't possible as the target
> > volume (including property values) gets overwritten
> > by 'zfs receive'.
> >
>
> By default, properties are not received. To p
NE 0 0 0
errors: No known data errors
Thanks.
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
E 0 0 0
c3t0d0 ONLINE 0 0 0
spares
c6t0d0   INUSE     currently in use
errors: No known data errors
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS, it seems like it should be at le
On Jun 21, 2009, at 8:57 PM, Richard Elling wrote:
Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6,
recently patched, with ~170M small files on ~170 datasets, after a
disk failure/replacement, i.e.,
wow, that is impressive. There is zero chance of
, e.g.,
adding a faster cache device for reading and/or writing?
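For anyone searching the archives later, the usual way to add a faster read
cache (L2ARC) or a separate intent log is sketched below with illustrative
device names:

  # add an SSD as a read cache device
  zpool add tank cache c5t0d0
  # add an SSD as a separate log device (helps synchronous writes)
  zpool add tank log c5t1d0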
I am also curious if anyone has a prediction on when the
snapshot-restarting-resilvering bug will be patched in Solaris 10?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667
Thanks.
--
Stuart Anderson ander
example exactly how to
interpret the various numbers from ls, du, df, and zfs used/referenced/
available/compressratio in the context of compression={on,off}, possibly
also referring to both sparse and non-sparse files?
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anders
On Wed, Apr 16, 2008 at 10:09:00AM -0700, Richard Elling wrote:
> Stuart Anderson wrote:
> >On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
> >
> >>UTSL. compressratio is the ratio of uncompressed bytes to compressed
> >>bytes.
> >>
e thought there was a more efficient way using the already
aggregated filesystem metadata via "/bin/df" or "zfs list" and the
compressratio.
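A hedged sketch of that approach, assuming used * compressratio is an
acceptable approximation of the uncompressed size and that this zfs get
supports -p for exact byte counts (dataset name illustrative):

  used=`zfs get -Hp -o value used tank/data`
  ratio=`zfs get -H -o value compressratio tank/data | sed 's/x$//'`
  echo "$used * $ratio" | bc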
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
null characters that weren't actually written to the disk.
This test was done with a file created via "/bin/yes | head", i.e.,
it does not have any null characters specifically for this possibility.
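For anyone repeating the test, a minimal sketch (names illustrative; my
understanding, which may be wrong, is that mkfile writes all zeros, which
compression turns into holes that never move compressratio):

  zfs create -o compression=on tank/comptest
  /bin/yes | head -1000000 > /tank/comptest/yes.txt   # repetitive, non-null data
  sync
  zfs get compressratio,used,referenced tank/comptest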
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~ander
On Mon, Apr 14, 2008 at 05:22:03PM -0400, Luke Scharf wrote:
> Stuart Anderson wrote:
> >On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
> >
> >>Stuart Anderson wrote:
> >>
> >>>As an artificial test, I created a filesystem with com
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
> Stuart Anderson wrote:
> >As an artificial test, I created a filesystem with compression enabled
> >and ran "mkfile 1g" and the reported compressratio for that filesystem
> >is 1.00x even though t
preciate any help in understanding what compressratio means.
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
On Thu, Mar 06, 2008 at 06:25:21PM -0800, David Pacheco wrote:
> Stuart Anderson wrote:
> >
> >It is also interesting to note that this system is now making negative
> >progress. I can understand the remaining time estimate going up with
> >time, but what does it mean f
e average server.
> Or the guys at SLAC have, unbeknownst to you, somehow accelerated your
> Thumper to near the speed of light.
>
> (:-)
>
If true, that would certainly help, since we actually are using these
thumpers to help detect gravitational waves! See, http://www.ligo.caltech.e
On Thu, Mar 06, 2008 at 11:51:00AM -0800, Stuart Anderson wrote:
> I currently have an X4500 running S10U4 with the latest ZFS uber patch
> (127729-07) for which "zpool scrub" is making very slow progress even
> though the necessary resources are apparently available. Currently
d find it convenient if the scrub
completion event was also logged in the zpool history along with the
initiation event.
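Until that happens, a hedged workaround is to record the completion line
from zpool status yourself, e.g. from cron after the usual scrub window
(pool name and log path illustrative):

  (date; zpool status tank | grep 'scrub:') >> /var/log/tank-scrub.log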
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
> --
> Prabahar.
>
> Stuart Anderson wrote:
> >Thanks for the information.
> >
> >How does the temporary patch 127729-07 relate to the IDR127787 (x86) which
> >I believe also claims to fix this panic?
> >
--
Stuart Anderson
for
> this panic is in temporary state and will be released via SunSolve soon.
>
> Please contact your support channel to get these patches.
>
> --
> Prabahar.
>
> Stuart Anderson wrote:
> >On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
> > >
On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
> Is this kernel panic a known ZFS bug, or should I open a new ticket?
>
> Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion failed:
> arc_buf_remove_ref(db->db_buf, db) == 0, file: ../../commo
:18 thumper1 genunix: [ID 655072 kern.notice] fe8000809c60
genunix:taskq_thread+bc ()
Feb 18 17:55:18 thumper1 genunix: [ID 655072 kern.notice] fe8000809c70
unix:thread_start+8 ()
Feb 18 17:55:18 thumper1 unix: [ID 10 kern.notice]
--
Stuart Anderson [EMAIL PROTECTED]
h
This kernel panic when running "zfs receive" has been solved with
IDR127787-10. Does anyone know when this large set of ZFS bug fixes
will be released as a normal/official S10 patch?
Thanks.
On Sat, Aug 25, 2007 at 07:36:25PM -0700, Stuart Anderson wrote:
> Before I open a new cas
nfo), ASCQ: 0x0, FRU: 0x0
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
s. That
system has been up for 2 weeks after disabling NCQ and has not displayed
any disconnected messages since then.
Can anyone confirm that 125205-07 has solved these NCQ problems?
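For reference, the way we disabled NCQ (if I remember the tunable correctly;
please check it against the patch README before relying on it) was an
/etc/system entry followed by a reboot:

  set sata:sata_max_queue_depth = 0x1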
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
On Wed, Oct 24, 2007 at 10:40:41AM -0700, David Bustos wrote:
> Quoth Stuart Anderson on Sun, Oct 21, 2007 at 07:09:10PM -0700:
> > Running 102 parallel "zfs destroy -r" commands on an X4500 running S10U4 has
> > resulted in "No more processes" errors i
90 4556K 1496K sleep   0:27  0.00% zfs
11360 root      1  59    0 4552K 1492K sleep   0:26  0.00% zfs
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
On Mon, Jul 16, 2007 at 09:36:06PM -0700, Stuart Anderson wrote:
> Running Solaris 10 Update 3 on an X4500 I have found that it is possible
> to reproducibly block all writes to a ZFS pool by running "chgrp -R"
> on any large filesystem in that pool. As can be seen below in
time() = 1189279453
/13:time() = 1189279453
Is this a known bug with fmd and ZFS?
Thanks.
On Fri, Sep 07, 2007 at 08:55:52PM -0700, Stuart Anderson wrote:
> I am curious why zpool status reports a pool to be in the D
0 0 0
spares
c8t1d0 INUSE currently in use
errors: No known data errors
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
/d3, offset 1645084672, content: kernel
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
x4500gc genunix: [ID 943907 kern.notice] Copyright 1983-2007
Sun Microsystems, Inc. All rights reserved.
On Tue, Jul 17, 2007 at 12:40:16PM -0700, Stuart Anderson wrote:
> On Tue, Jul 17, 2007 at 03:08:44PM +1000, James C. McPherson wrote:
> > >>Log a new case with Sun, and make
ratio 12.39, dump succeeded
rebooting...
# dumpadm
Dump content: kernel pages
Dump device: /dev/md/dsk/d2 (swap)
Savecore directory: /var/crash/x4500gc
Savecore enabled: yes
# ls -laR /var/crash/x4500gc/
/var/crash/x4500gc/:
total 2
drwx-- 2 root root 512 Jul 12 16:26 .
drw
On Tue, Jul 17, 2007 at 02:49:08PM +1000, James C. McPherson wrote:
> Stuart Anderson wrote:
> >Running Solaris 10 Update 3 on an X4500 I have found that it is possible
> >to reproducibly block all writes to a ZFS pool by running "chgrp -R"
> >on any large filesyste
event.
Is this a known issue or should I open a new case with Sun?
Thanks.
--
Stuart Anderson [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson