James C. McPherson wrote:
Hi Jeff,
Jeff Bonwick wrote:
I've had a "zdb -bv root_pool" running for about 30 minutes now.. it
just finished and of course told me that everything adds up:
This is definitely the delete queue problem:
Blocks LSIZE PSIZE ASIZE avgcomp %Total Type
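For anyone who wants to repeat the check, the traversal Jeff describes boils down to the command below; "tank" is just a placeholder pool name, and since zdb is a private, unstable interface its flags and output can change between builds:

# zdb -bv tank

This walks every block pointer in the pool and prints the per-type space accounting shown above, so the runtime scales with the amount of data in the pool.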
Hey all,
On Mon, 2006-05-22 at 14:53 +1000, Boyd Adamson wrote:
> > I've tried smreg and wcadmin but do not know the /location/name of
> > the ZFS app to register. Any help is appreciated, google and
> > sunsolve come up empty. On the same note, are there any other apps
> > that can be regis
Hi Darren
Darren J Moffat wrote:
James C. McPherson wrote:
...
I know that zdb is private and totally unstable, but could we get
the manpage for it to at least say what the LSIZE, PSIZE and ASIZE
columns mean please?
See Section 2.6 of the ZFS On-Disk Specification:
-- BEGIN QUOTE --
lsize:
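The quote gets cut off above, so here is a paraphrase of those three fields as I read that section of the spec (not the verbatim text, so check the document itself):

lsize - logical size: the size of the data before compression is applied
psize - physical size: the size of the data on disk after compression
asize - allocated size: all space allocated for the block, including raidz parity and gang-block overhead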
On 5/22/06, Tim Foster <[EMAIL PROTECTED]> wrote:
Hey all,
On Mon, 2006-05-22 at 14:53 +1000, Boyd Adamson wrote:
> > I've tried smreg and wcadmin but do not know the /location/name of
> > the ZFS app to register. Any help is appreciated, google and
> > sunsolve come up empty. On the same note,
Robert Says:
Just to be sure - you did reconfigure the system to actually allow larger
IO sizes?
Sure enough, I messed up (I had no tuning in place to get the above data), so
1 MB was my max transfer size. Using 8 MB I now see:
Bytes Sent    Elapse of phys IO    Size
8 MB;         357
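For reference, the usual way to allow larger physical transfers on Solaris, as I understand it, is to raise maxphys in /etc/system and reboot; the values below are simply 8 MB in hex, and the ssd line only matters if the disks sit behind the ssd driver (use the matching tunable for whatever driver is actually in the path):

* /etc/system entries to allow 8 MB physical I/O (reboot required)
set maxphys=0x800000
set ssd:ssd_max_xfer_size=0x800000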
Could anyone confirm that with the recent additions to ZFS, most notably ZFS
version 2, that a ZFS pool created in b37 or older will still be
readable/importable in b38 ZFS version 2 and newer?
If so, is there any serious negative impact on using an existing ZFS pool
or should the older poo
Wes Williams wrote:
Could anyone confirm that with the recent additions to ZFS, most notably ZFS
version 2, that a ZFS pool created in b37 or older will still be
readable/importable in b38 ZFS version 2 and newer?
If so, is there any serious negative impact on using an existing ZFS pool
o
On Mon, May 22, 2006 at 07:19:30AM -0700, Wes Williams wrote:
> Could anyone confirm that with the recent additions to ZFS, most
> notably ZFS version 2, that a ZFS pool created in b37 or older will
> still be readable/importable in b38 ZFS version 2 and newer?
Yes. If you do not do an explicit "
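For completeness, this is roughly how the version handling looks from the command line; "tank" is only an example pool name, and upgrading is one-way, so an upgraded pool can no longer be imported on older bits:

# zpool upgrade        # list pools still on an older on-disk version
# zpool upgrade -v     # show the versions this build understands
# zpool upgrade tank   # upgrade the pool named tank in place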
Thank you gentlemen for your quick replies.
The ZFS upgrade process sounds like it'll be a snap, since I'll simply use the
native ZFS version 2 on my next install/upgrade and import my data from
the existing backup pools (prior to ZFS v2).
Keep up the great work!
Gregory Shaw writes:
> Rich, correct me if I'm wrong, but here's the scenario I was thinking
> of:
>
> - A large file is created.
> - Over time, the file grows and shrinks.
>
> The anticipated layout on disk due to this is that extents are
> allocated as the file changes. The extent
Apologies if this has been addressed, but looking at some of the sun
blogs and google searches I have not been able to find an answer.
Does ZFS support on write automatic snapshots?
For example, according to defined policy, every time a file is written
a snapshot is created with the diff stored.
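As far as I know there is no built-in per-write trigger, but a policy along those lines can be approximated by scripting the existing snapshot command from cron or a similar scheduler; the dataset name and timestamp format below are only examples:

# zfs snapshot tank/home@$(date +%Y%m%d-%H%M%S)   # cheap copy-on-write snapshot
# zfs list -t snapshot                            # review what has accumulated
# zfs destroy tank/home@20060522-1200             # prune old snapshots per policy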
Thanks! I will do the below.
I brought it up on the alias, as I thought the problem would be
encountered by a user eventually. They'll want the same information
-- What does the error impact?
On May 22, 2006, at 12:25 AM, Matthew Ahrens wrote:
On Fri, May 19, 2006 at 01:23:02PM -0600, G
Alex Barclay wrote:
Apologies if this has been addressed, but looking at some of the sun
blogs and google searches I have not been able to find an answer.
Does ZFS support on write automatic snapshots?
For example, according to defined policy, every time a file is written
a snapshot is created
I received the following from Tim / Sun. Thought I would post it here:
Not sure about the last bit of your question, but in order to register
the ZFS gui, you can do :
# smreg add -a /usr/share/webconsole/zfs
Warning: smreg is obsolete and is preserved only for
compatibility with legacy
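Putting the pieces together, the sequence should read roughly as below; the path is the one Tim gave, and smcwebserver is the Java Web Console control command (I believe the console needs a restart before the app shows up). I haven't chased down the exact wcadmin syntax the warning points to:

# smreg add -a /usr/share/webconsole/zfs
# smcwebserver restart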
Cool, I'll try the tool; and for good measure, the data I
posted was sequential access (from a logical point of view).
As for the physical layout, I don't know; it's quite
possible that ZFS has laid out all blocks sequentially on
the physical side, so certainly this is not a good way
Darren, thank you for your reply.
While it didn't come out correctly (need to brush up on nomenclature),
I did mean snapshot on closure.
Now if what you really mean is snapshot on file closure I think you
might well be on to something useful. What's more, NTFS has some cool
stuff in this area fo
On Mon, May 22, 2006 at 04:47:07PM +0100, Darren J Moffat wrote:
> Now if what you really mean is snapshot on file closure I think you
> might well be on to something useful. What's more, NTFS has some cool
> stuff in this area for consolidating identical files. The hooks that
> would need to be
Hi Bob,
The Lockhart application console, in which the ZFS web application
sits, was uprev'ed from 2.2.x to 3.0 in build 37. We did not receive
any notice of this from the Lockhart team until bugs started coming in
against the ZFS GUI.
Since then the ZFS GUI has been ported to the new Lockhart 3
Jeff Bonwick wrote:
I've had a "zdb -bv root_pool" running for about 30 minutes now.. it
just finished and of course told me that everything adds up:
This is definitely the delete queue problem:
Blocks LSIZE PSIZE ASIZE avgcomp %Total Type
4.18M   357G   222G   2
> > 6420204 root filesystem's delete queue is not running
> The workaround for this bug is to issue the following command...
>
> # zfs set readonly=off /
>
> This will cause the delete queue to start up and should flush your queue.
Tabriz,
Thanks for the update. James, please let us know if th
Jeff Bonwick wrote:
6420204 root filesystem's delete queue is not running
The workaround for this bug is to issue the following command...
# zfs set readonly=off /
This will cause the delete queue to start up and should flush your queue.
Tabriz,
Thanks for the update. James, please let us
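Spelling the workaround out as a sequence (the filesystem is addressed by its mountpoint here, exactly as in the workaround above; the property may already read "off", and it is the act of setting it that kicks the queue):

# zfs get readonly /       # check the current value (use the dataset name if a mountpoint isn't accepted)
# zfs set readonly=off /   # setting the property (re)starts delete queue processing
# df -h /                  # free space should come back as the queue drains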
I updated an i386 system to b39 yesterday, and noticed this when
running iostat:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.5    0.0   10.0  0.0  0.0    0.0    0.5   0   0 c0t0d0
    0.0    0.5    0.0   10.0  0.0  0.0    0.0    0.6   0   0 c0t1d0
0.0 65.
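For anyone comparing, that looks like the extended device statistics view; the invocation that produces that layout should be something like the below, with interval and count being arbitrary:

# iostat -xn 5 10   # extended stats with logical device names, 5-second samples, 10 reports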