> Besides /etc/system, you could also export all the pools, use mdb to
> set the same variable that /etc/system sets, and then import the
> pools again. I don't know of any other mechanism to limit ZFS's
> memory footprint.
>
> If you don't do ZFS boot, manually import the pools after t
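The procedure quoted above can be sketched roughly as follows. This is an assumption-laden sketch, not verbatim from the thread: the tunable is the standard zfs_arc_max (the same variable /etc/system sets), the cap value and pool name are placeholders, and whether the live-patch takes effect depends on the ARC re-reading the variable after the export/import cycle.

```shell
# Persistent approach: cap the ARC at 1 GB via /etc/system
# (takes effect at the next boot):
#   set zfs:zfs_arc_max = 0x40000000

# One-off approach without rebooting: export the pool(s), patch the
# kernel variable with mdb in write mode, then import again.
zpool export mypool
echo "zfs_arc_max/Z 0x40000000" | mdb -kw
zpool import mypool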
> If ZFS is not being used significantly, then the ARC should not
> grow. The ARC grows based on usage (i.e., the amount of ZFS
> files/data accessed). Hence, if you are sure that the ZFS usage is
> low, things should be fine.
I understand that it won't grow, but I want it to be smaller than the default.
Hi,
Other than modifying /etc/system, how can I keep the ARC cache low at boot time?
Can I somehow create an SMF service and wire it in at a very low level to put a
fence around ZFS memory usage before other services come up?
I have a deployment scenario where I will have some reasonably large
Off the lists, someone suggested to me that the "Inconsistent
filesystem" may be the boot archive and not the ZFS filesystem (though I
still don't know what's wrong with booting b99).
Regardless, I tried rebuilding the boot_archive with bootadm
update-archive -vf and verified it by mounting it
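The rebuild-and-verify step can be sketched like this. The archive path assumes x86 (SPARC uses /platform/`uname -m`/boot_archive), and the read-only loopback mount assumes a build where the archive is a mountable ufs image; the lofi device number is whatever lofiadm prints.

```shell
# Rebuild the boot archive verbosely, forcing an update.
bootadm update-archive -vf

# Loopback-mount the archive read-only to inspect its contents.
lofiadm -a /platform/i86pc/boot_archive   # prints e.g. /dev/lofi/1
mount -F ufs -o ro /dev/lofi/1 /mnt
ls /mnt
umount /mnt
lofiadm -d /dev/lofi/1
```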
    txg=327816
    pool_guid=6981480028020800083
    hostid=95693
    hostname='opensolaris'
    top_guid=5199095267524632419
    guid=5199095267524632419
    vdev_tree
        type='disk'
        id=0
        guid=5199095267524632419
        path='/dev/dsk/c4t0d0s0'
> On Sat, Mar 22, 2008 at 11:33 PM, Matt Ingenthron
> <matt.ingenthron@sun.com> wrote:
One more scrub later, and the snapshot I was trying to send,
@laptopmigration, is now showing errors, while the errors on the old
snapshots are gone (since I destroyed those snapshots).
Is this expected behavior? Should the errors only show on one snapshot
at a time? I have a suspicion that if
One update to this, I tried a scrub. This found a number of errors on old
snapshots (long story, I'd once done a zpool replace from an old disk with
hardware errors to this disk). I destroyed the snapshots since they weren't
needed. The snapshot I was trying to send did not have any errors.
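For anyone following along, the check described above can be sketched as the following two commands (the pool name "tank" is a placeholder):

```shell
zpool scrub tank       # re-verify all data on disk
zpool status -v tank   # -v lists the files/snapshots with permanent errors
```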
Hi all,
I'm migrating to a new laptop from one which has had hardware issues lately. I
kept my home directory on zfs, so in theory it should be straightforward to
send/receive, but I've had issues. I've moved the disk out of the faulty
system, though I saw the same issue there.
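For reference, the migration itself can be sketched roughly as below. The dataset path and target hostname are placeholders, not from the thread; -R sends the snapshot with all descendants and properties, and -d on the receiving side reconstructs the dataset names under the given filesystem.

```shell
# On the old machine: snapshot the home filesystem recursively and
# stream it to the new machine over ssh.
zfs snapshot -r rpool/export/home@laptopmigration
zfs send -R rpool/export/home@laptopmigration | \
    ssh newlaptop /usr/sbin/zfs receive -d rpool/export
```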
The behavior
oller card. I've never measured this or seen it measured -- any
pointers would be useful. I believe the IOs are 8KB; the application
is MySQL.
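One way to measure the actual I/O sizes reaching the devices is iostat's extended output (a sketch; the interval is arbitrary):

```shell
# Extended device statistics with descriptive names, every 5 seconds.
# Average read size per device = (kr/s) / (r/s); likewise for writes.
iostat -xn 5
```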
Thanks in advance,
- Matt
--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com/mingenthron/
I'm potentially stepping in areas I don't quite know enough about, but
others can jump in if I speak any mistruths :)
More inline...
Georg-W. Koltermann wrote:
Hi,
OK, I know zfs-fuse is still incomplete and performance has not been
considered, but still, before I'm going to use it for m
g covers
topics relating to what goes on in his sausage making duties.
- Matt
p.s.: The web says a German word for colloquialism is umgangssprachlich.
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
h respect to expanding a filesystem), that's available today, with
the limitation that you can't expand a raidz group itself.
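That limitation in practice: you grow a pool by adding a whole new raidz vdev alongside the existing one, not by widening the existing group. A sketch (pool and device names are placeholders):

```shell
# Adds a second raidz group to the pool; the existing group keeps
# its width, and ZFS stripes new writes across both vdevs.
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
```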
Regards,
- Matt
Mike Seda wrote:
Basically, is this a supported zfs configuration?
I can't see why not, but whether it's supported is something only Sun
support can speak for, not this mailing list.
You say you lost access to the array though-- a full disk failure
shouldn't cause this if you were using RAID-5 on the
subcommand. In your case, you can run,
for instance: "zpool status mypool".
Good luck,
- Matt
his with
customers in the past, it can be quite a challenge.
Consider yourself lucky that zfs is catching/correcting things!
- Matt
mainly) I need along with something
reliable.
- Matt
After some quick experimenting, I determined that it is in fact a single
raidz pool with all 47 devices. Apparently something was either done
wrong or miscommunicated in the process.
Sorry for the bandwidth.
- Matt
Matt Ingenthron wrote:
Hi all,
Sorry for the newbie question, but I
0 0 0
          c7t6d0  ONLINE       0     0     0
          c7t7d0  ONLINE       0     0     0

errors: No known data errors
Thanks in advance,
- Matt