I have been experimenting with ZFS recently, and have been seeing the
occasional reboot with the above error. A quick google shows that this is
a known problem, but that it should go away if you increase kernel
memory. I have done this, but I am still seeing the problem - and not
under high load e
> vm.kmem_size="512M"
> vm.kmem_size_max="512M"
I have something similar to this in mine...
vm.kmem_size=629145600
vm.kmem_size_max=629145600
which is about 600 meg - the machine has 2 gig of RAM.
> vfs.zfs.prefetch_disable="1"
> vfs.zfs.arc_max="150M"
> kern.maxvnodes="40"
now these I haven't got -
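[ For reference, a minimal /boot/loader.conf sketch combining the tunables
discussed above - the values are only illustrative, taken from the figures
quoted in this thread, not recommendations:
    vm.kmem_size="600M"
    vm.kmem_size_max="600M"
    vfs.zfs.arc_max="150M"            # keep the ARC well below kmem_size
    vfs.zfs.prefetch_disable="1"      # prefetch is often disabled on low-memory boxes
]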
I have an HP dc5750 workstation and I get the following when I try and boot
it with ACPI enabled. This happens almost instantly, just after the
processors are detected. I am running 7.0 these days, but I had the same
issue under 6.3 (which I reported in PR 117918, though without
the panic line, which
Following up my own email here: I have compiled the kernel
debugger into my kernel, and much to my surprise it does actually
drop into the debugger when it panics.
So, what can I do from there to help sort this out? A quick backtrace
shows the following call sequence:
madt_probe
> MADT is the ACPI table that enumerates APICs. Do you have the offset of
> madt_probe()?
I am sure I can get it for you - do I need to do anything special
in DDB, or is it just the numbers in the bt that you are after?
I can make this panic very easily and do whatever is necessary to
get info
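For what it's worth, what gets typed at the db> prompt is just this (a sketch -
the full output is in the photo linked later in the thread):
    db> bt                # backtrace, one function+offset per frame
    db> show registers    # register state at the time of the panic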
> So it appears to be dying here:
>
> (gdb) l *madt_probe+0x119
> 0xc06e7c69 is in madt_probe (/usr/src/sys/i386/acpica/madt.c:241).
> 236 if (xsdt == NULL) {
> 237 if (bootverbose)
> 238 printf("MADT: Failed to map
> Just the stack trace offsets.
is all the info you need here?
http://toybox.twisted.org.uk/~pete/acpi_panic.jpg
> I know 7 has had a lot of work done on locking and ULE but are there
> any other reasons to go for that instead of 6.3? Conversely are there
> any reason which would point away from 7 such as stability issues?
7 is great - very stable, fast, includes ZFS, has gcc 4.0 and is excellent
in my opini
> MADT is the ACPI table that enumerates APICs. Do you have the offset of
> madt_probe()?
I started copying down the hex, but it turned out to be a more accurate idea
to get my colleague to take a photo with his camera phone. Panic and kdb
'bt' output can be found here:
http://toybox.twisted.or
> The patch would be the same, it tried to fix an issue where if the table is
> longer than the space we are borrowing to map things we could end up with
> problems. I.e. the changes weren't in the RSDT/XSDT path at all, but in the
> common code used to map tables. If you are using RSDT, then
If I've changed one line of one file, how can I recompile without
going back to the top and doing a 'make buildkernel', so I just recompile
that one file? It's getting a bit tedious to wait 40 minutes when I've
only changed one line - is there a better way?
cheers,
-pete.
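[ For anyone finding this in the archives: one commonly suggested shortcut is
roughly the following - a sketch, assuming the stock buildkernel layout and a
kernel config called GENERIC:
    cd /usr/obj/usr/src/sys/GENERIC   # the directory buildkernel compiles in
    make                              # rebuilds only the changed file and relinks
    make install                      # or run 'make installkernel' from /usr/src
]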
So, I have a farm of machines running 7.1/amd64, all of which have 16 gig of
memory in them. This afternoon, as an experiment, I altered loader.conf
to have these two lines in it:
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
This is what I do on machines running ZFS - these machines are not, how
> You've probably reduced kmem_size from the default. I don't set anything on
> my 6 GB amd64 system, and I get:
>
> $ sysctl vm.kmem_size vm.kmem_size_max
> vm.kmem_size: 2061496320
> vm.kmem_size_max: 3865468109
>
> I assume your 16GB system would default to even larger numbers. What values
> d
> I'm running 7-STABLE as of Feb 26 or so. Commit r187466 on Jan 20 bumped up
> kmem_size_max on amd64 to 3.6GB:
>
> http://svn.freebsd.org/viewvc/base?view=revision&revision=187466
Now I am worried about upgrading to STABLE! ;) I can't
think of a reason why I am seeing what I am seein
> > bce(4) is broken in stable, your best option is to revert to the
> > driver in releng 7.1.
>
> Is anybody working on fixing bce(4) in stable? As far as
> I can see in the repository, nothing happened recently.
> The last commit in releng 7 was in December last year.
I am slightly surprised
> Now istgt is a part of ports. (net/istgt)
> FreeBSD issue is solved by danny's patch.
> After applying the patch, iscontrol can connect to istgt.
I am interested in giving this a try, though not immediately as I
am away from the office at the moment. Do I need to apply a patch
to iscontrol to ma
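(Installing it is the usual ports dance - roughly:
    cd /usr/ports/net/istgt
    make install clean
    # sample configuration files should land under /usr/local/etc/istgt/
)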
> I have tested "net/istgt" for a couple of days with Windows XP and it
> works more reliably than NetBSD "net/iscsi-target".
> With the NetBSD implementation I sometimes lost partition/filesystem
> information after disconnecting the server from the network or rebooting my
> computer.
I am just trying it again
> Thank you for reporting.
...
> I'm very interested in this case.
Well, I just finished doing a 'zpool scrub' on the disc image mounted
locally on the machine which was running istgt. That still shows some
errors - but only 36 of them, compared to the 9000+ I get when running the
scrub mounted re
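(The test itself is nothing special - roughly the following, with 'tank'
standing in for the real pool name:
    zpool scrub tank        # re-reads and checksums every block in the pool
    zpool status -v tank    # error counts, and affected files, once the scrub finishes
)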
Oh dear, doing the test again caused the client machine to panic,
with the following message:
panic: solaris assert: 0 == dmu_buf_hold(os, lr->lr_foid, boff, zgd, &db), file
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c,
line: 955
(that was copied out b
> I have not seen that message myself.
> How frequently have you encountered it?
Only once. I did not try it again because that is our work server.
I did two tests, copying files.
1) Using iscsi-target ... copies OK, but ZFS pool has errors
2) Using istgt - causes the panic described.
The c
> As I supposed in my previous message, your MB is a MicroStar product. So I
> insist that you read thread [1] in freebsd-stable named 'Interrupt
> storm' started by Dan Langille
I (still) have the same problem on my MSI Platinum and having re-read
all of those threads in case I missed something,
> did you try to make a kernel with KDB and DDB? A DEBUG kernel? It seems
> that turning off optimization and placing debugging stuff in the kernel
> solved my problem.
I turned off all optimisation, but I didn't try compiling in the debuggers,
no. I will give that a try and see if it gets rid of the p
> Is this still correct:
Yup - I made a patch to start the priorities in the middle of the
range and to add a 'prefer-low' to reverse the way it is selected
to get round the problem though. You can try it out if you want...
-pete.
> If it can apply to 7-stable or 7.1-release I'll be even grateful to
> have it ;)
Find it here: http://www.freebsd.org/cgi/query-pr.cgi?pr=123630
That was relative to 7.0 - I haven't tried it on 7.1. I was hoping
that the changes would be incorporated, but pjd didn't like the solution
and said
> I'll try it and report after the weekend :)
It should be OK - we ran it here on the main database server until
a couple of months ago, when I went back to using vanilla 7.1 in an attempt to
try and track down a bug. I will probably continue to use the patch in
future though - it's very stable,
I admit I was sceptical of this suggestion - but it actually seems to
have worked. Compiling a straight GENERIC kernel with KDB and DDB
included does seem to have made my irq22 interrupt storms go away.
Certainly I have spent some time trying to provoke the problem and not
managed to make it reappear
> I did this for 2-3 years and it has worked very well. Of course I took
> care of using the right priorities from the start. This setup has now
> been replaced by a ZFS mirror. The resilver time is just so much better :)
How do you find ZFS performance in an asymmetric mirror, and does it
seem to
> Absolutely. You really must use a tool that interacts with the database
> to perform the backup. Most commercial DBs have hooks that allow the
> backup routines to call out to custom snapshot facilities. One would
> usually request a backup through the database, which would then freeze IO
> on any request to previously mounted FS. On UFS, mounted NFS file systems
> survive server reboots...
Are you sure about that? I have a lot of systems sharing files using UFS
and I see the 'stale NFS file handle' thing if I reboot the server too.
No ZFS involved there.
-pete.
I just upgraded a test server to STABLE from today (Good Friday, around
mid-day GMT), and though it all came up without errors I could not get
any network connectivity out of the machine. The ethernet uses lagg
to bundle two bce interfaces together - I know there were updates to
the bce driver recent
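The lagg setup is presumably the standard rc.conf arrangement, roughly this
(the address is a placeholder):
    ifconfig_bce0="up"
    ifconfig_bce1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport bce0 laggport bce1 192.0.2.10/24"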
>You're the second one to report this panic so it's caught our notice as
>something to watch over. One report of a panic like this is potentially
>issues with memory errors or any number of other possibilities given
>this area of code but more than that deserves us paying more attention
>to it.
W
Well, I tried to narrow this down a bit by removing lagg from the
equation, but the switch requires LACP to be active on those ports
so I can't test in isolation unfortunately.
Is anyone running 7.2 on a DL360 G5 with working ether?
-pete.
I found a machine with identical hardware that wasn't using lagg, and have
tested there to see if bce on its own works. It does. So now I am
wondering if...
a) Does something in the new bce changes stop it working with lagg?
b) Is there something broken in lagg independent of b
Ok, I have now confirmed that the commit mentioned in this email:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=187270+0+archive/2009/freebsd-stable/20090405.freebsd-stable
is what breaks lagg using bce on my machines. I did submit a PR about
this, but I can't find it yet (though I don't know how l
> We will, and if we do wind up shipping 7.2-REL with lagg(4) broken
> (there is still time for a fix if we find it fast enough so that's not
> definite yet...) apologies in advance. At least as things stand now it
Well, kind of my fault too for not getting around to testing the driver
for two w
> - bce(4) updated (there is a report that lagg(4) does
> not work after the update, fixing that may need to be
> done as an Errata Notice after the release)
[ this is http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/133756 ]
I just tested again with a csup of RELENG_7_2 and
This is just a quick update about some further investigations on
this. I tested out the patch that Niki Denev kindly sent me which
apparently fixes a length issue when zero copy sockets are not
in use. This did not, however, solve the problem, but as part of
this I ran tcpdump on the bce0 and bce1
> In the past it had been suggested that for zfs tuning, something like
>
> vm.kmem_size_max="1073741824"
> vm.kmem_size="1073741824"
> vfs.zfs.prefetch_disable=1
>
> However doing a simple test with bonnie and dd, there does not seem
> to be very much difference in 4 configs. Am I better off jus
One more test I just managed to do - using bce and lagg in 'failover' mode
works fine, so it would appear that the problem lies with LACP.
-pete.
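(For the record, the only difference from the LACP setup is the lagg protocol
in rc.conf, i.e. something like:
    ifconfig_lagg0="laggproto failover laggport bce0 laggport bce1 192.0.2.10/24"
)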
> Basic data on my experience with the xpt_config hang; I have more
> detail if needed, but I doubt anyone will believe it. I'm not even
> sure I do.
I am not sure what you mean by that ... something odd about the hang?
For what it's worth, I have also seen this - I get (or got) precisely
the sa
Just wondering if there was any update to this? I seem to
be the only one who actually has the problem, but I have
gone as far as I can trying to diagnose it unless someone
can send me patches to test.
cheers,
-pete.
> I guess it was fixed in -current in r191923.
That looks like the same length fix patch as I was sent
for RELENG_7. Unfortunately it has no effect on the problem.
http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/133756
-pete.
> http://www.andric.com/freebsd/zfs13/r192269/zfs_mfc-r192269.diff.bz2
Thanks - am going to give this a try later. Presumably if I leave the
pool at the revision it is currently on then I can revert back easily?
Also, is this the version which no longer requires any tuning parameters
in loader.c
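(For anyone wanting to check before committing to the upgrade, a sketch -
'tank' is a placeholder pool name:
    zpool upgrade           # lists pools whose on-disk version is older than the code supports
    zpool get version tank  # shows the pool's current on-disk version
Upgrading the pool itself with 'zpool upgrade tank' is the one-way step.)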
> Many people are happily running an old pool with the new code. I have
> done that in a VM and run load over it just to be certain. The tuning
> still applies to i386. On amd64 vm backpressure works, but may
> actually be too aggressive - shrinking the ARC in favor of the
> inactive pages queue.
> To Pete: Did you overwrite the if_bce.c/if_bcereg.h with previous
> -STABLE code or was it from 7.1-RELEASE?
I am using code from a csup with the date set to
2009.03.30.23.59.59, which is immediately before the
changes (the one which doesn't work is a csup with
a date of 2009.03.31.23.59.59).
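(For reference, pinning a csup checkout to a date is just a couple of lines in
the supfile, e.g.:
    *default release=cvs tag=RELENG_7
    *default date=2009.03.30.23.59.59
)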
> I get this in dmesg after make installkernel && shutdown -r now, zfs
> pool is not mounted. /usr is on zfs so can't installworld.
I can't help, but thanks for the bug report as it prompted me
to upgrade both kernel and world at the same time, which works
fine after reboot. If I were you I would
All looking good here - 3rd DNS server is now running new ZFS with an
upgraded pool, and I just did a server in the office which is running
a database and fileserver, also upgrading the pool. These are all amd64
systems, and the vm backpressure system seems to work nicely - in that
the kernel memo
> Is there anyone here using ZFS on top of a GELI-encrypted provider on
> hardware which could be considered "slow" by today's standards? What
I run a mirrored zpool on top of a pair of 1TB SATA drives - they are
only 7200 rpm so pretty dog slow as far as I'm concerned. The
CPU is a dual core Ath
> Ouch, that does indeed sound quite slow, especially considering that
> a dual core Athlon 6400 is pretty fast CPU. Have you done any
> comparison benchmarks between UFS2 with Softupdates and ZFS on the
Not at all - but, now you have got me curious, I just went to
a completely different system (
> David and I have committed some fixes to 7-STABLE tree, and I think all
> important bce(4) fixes have been merged now. If you have bce(4)
> interfaces PLEASE help us to test them, the simplest way of doing this
> is to sync your code with RELENG_7 and rebuild kernel. [1]
I won't have time to tr
> That should be the packet length fix at line 5973 of if_bce.c. I did
That's the patch which was sent to the PR, yes? I've tried that and it
doesn't fix it at all for me unfortunately.
> not test the teaming fix myself but another user provided a similar
> packet length fix which he claimed di
Well, I found time to test this today after all - and it does actually
fix the issue - my bce devices now work properly with lagg and lacp.
Thank you!
-pete.
> So can we claim that the problem has been solved? Could you please give
> us a copy of output from 'ident /sys/dev/bce/* /sys/dev/mii/miidev
> /sys/dev/mii/brgphy*'?
Yes, it looks very much that way. I've no idea what changed to fix it, but
it is certainly working now on my test machine. I will
> My story is very similar to Pete's.
> http://lists.freebsd.org/pipermail/freebsd-stable/2009-January/047487.html
My problem, which you link to there, turned out to be due to ICMP
redirects, and is most definitely fixed in 7.2. So, your problem is
not the same as mine, but some of the tips give
I am looking to deploy a couple of hundred systems, which are supposed
to attach to a network and be plug-in-and-go. I am thinking of doing
this with a FreeBSD installation, duplicated onto flash cards, and
dumped into some off-the-shelf hardware. The question is, what hardware?
I've done some r
> I'm not 100% sure, but fairly sure that you'll have a hard time
> finding something that combines the low-power standalone type spec with
> a 64-bit capable processor. Once you get the higher-end processor,
That was my experience when shopping around, yes - annoying as I
don't need anything pa
> http://www.tranquilpc-shop.co.uk/acatalog/T7-330_Barebones.html
Now *that* is very much what I am thinking of - OK, I will need to drop
in a CF->SATA adapter along with the card, but that's not much hassle. 64 bit,
and I can add in more RAM than on the other. Thanks, I hadn't realised
the new Atoms did 6
> I've tried the Atom330 (D945GCLF2). It works fine with amd64...however
> it's rather wasted for 64bit considering it maxes out at 2gb. But I
> suppose if you wanted to standardize on amd64 installs, this is good.
Size of memory doesn't bother me really - I made the move to amd64 because
of the
> The new 2tb disk you buy can very often be actually a few sectors
> smaller than the disk you are trying to replace, this in turn will
> lead to zfs not accepting the new disk as a replacement, because it's
> smaller (no matter how small).
Heh - you are in for a pleasant surprise, my friend! ;-)
> If this is true, some magic has been done to the FreeBSD port of ZFS,
> because according to the Sun documentation it is definitely not supposed
> to be possible.
I just tried it again to make sure I wasn't imagining things - you
can give it a shot yourself using mdconfig to create some drives. It
w
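The test is easy to reproduce with md-backed "drives", roughly like this
(sizes are only illustrative):
    mdconfig -a -t swap -s 1g -u 0       # two 1 GB memory-backed disks
    mdconfig -a -t swap -s 1g -u 1
    zpool create testpool mirror md0 md1
    mdconfig -a -t swap -s 900m -u 2     # a slightly smaller replacement
    zpool replace testpool md1 md2       # accepted here despite md2 being smaller
    zpool destroy testpool               # clean up ('mdconfig -d -u N' removes the md devices)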
> Haven't had time to test (stuck at work), but I will trust your word
> :) Well, this sounds nice and sensible. I am curious though if there
It's good, isn't it? I just did another test, replacing both drives
with smaller ones, and you can't then recursively add an even smaller one
in :-) If you
> All the ZFS tuning guides for FreeBSD (including one on the FreeBSD
> ZFS wiki) have recommended values between 64M and 128M to improve
> stability, so that's what I went with. How much of my max kmem is it
> safe to give to ZFS?
If you are on amd64 then don't tune it, it will tune itself. If you
> > I'm not sure how you arrive at this number; even with -CURRENT (on i386,
> > with all debug symbols), I could store about 4 complete kernels on such
> > a filesystem:
> >
> > $ du -hs /boot/kernel*
> > 122M /boot/kernel
>
> atom# du -hs /boot/kernel*
> 205M    /boot/kernel
i386: 127 meg
amd
> The problem returns:
> Jul 6 20:12:10 polo kernel: interrupt storm detected on "irq22:";
> throttling interrupt source
Six months is a long gap! I was hoping the problem had gone away. I
haven't seen it on here since I started running 7.2-STABLE, and
before that I made it go away by using a deb
Oh FFS! This morning I sent the following email...
> Six months is a long gap! I was hoping the problem had gone away. I
> haven't seen it on here since I started running 7.2-STABLE, and
> before that I made it go away by using a debug kernel.
...and within an hour of typing that I also started see
> I would say in this case you're *not* giving the entire disk to the
> pool, you're giving ZFS a geom that's one sector smaller than the disk.
> ZFS never sees or can touch the glabel metadata.
Is ZFS happy if the size of its disc changes underneath it? I have
expanded a zpool a couple of t
> I have earlier posted my tests with Linux. All tests were done with same CF
> cards and the very same multi-card reader on the same computer. They work on
> Linux. That is, nothing wrong with the multi-card reader, it does its job
> of signaling well.
What makes you sure that Linux is using
> This sort of mechanism has been suggested before, but the problem you
> described (ports installed "on purpose" becoming a dependency of something
> else) is not an easy one to solve.
I don't really see why that is a "problem" that needs solving. Certainly
all I want is to be able to list ports
> I don't have a specific ETA on BETA3 going out the door, except to say that so
> far several architectures have reported back on successful builds, so probably
> quite soon.
Is there any point in making bug reports for BETA2 at this point? I
only got to try it a couple of days ago, and had a big
Just testing the new BETA3 of 8.0 - I did a vanilla install
using auto layout and no odd options on my first hard drive.
The installer sees the disc as having 63 sectors per
track. Upon booting the installed OS, however, the device
driver reports 32 sectors per track, as below:
da0 at ciss0 bus 0
> ZFS includes support for RAID0 (stripe), RAID1 (mirroring), RAID5 and RAID6
> (raidz1/raidz2), and (soon in OpenSolaris) RAID7 (raidz3). Why would you
> want to build a pool out of devices that are already RAID'd together?
Because gmirror type RAIDing is more appropriate for your application
th
[ originally sent to geom, but am throwing it open to a wider
audience as I didn't get any replies there]
I am using 7.2-STABLE from October 7th on all machines, but this
has been going on a while. Very simply, I am mirroring together a pair
of discs, one local, one remote. The remote disc is acces
> Just a wild guess, have you tried to set kern.geom.mirror.timeout to a
> higher value?
Yes, I tried values all the way up to 600, no effect at all - plus the
failure comes way before that timeout value (which is in seconds I assume).
-pete.
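(For completeness, the knob in question is both a loader tunable and a sysctl,
so what was tried amounts to:
    sysctl kern.geom.mirror.timeout=600       # value in seconds
or, persistently, kern.geom.mirror.timeout="600" in /boot/loader.conf.)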
Just about to build a new ZFS-based system and I was wondering
what the recommended way to dedicate a whole disc to ZFS is
these days. Should I just give it 'da1', 'da2' etc as I have
done in the past, or is it better to use GPT to create a
partition over the whole disc, which is marked as
being fo
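The GPT route I am asking about would look roughly like this (device and label
names are placeholders):
    gpart create -s gpt da1
    gpart add -t freebsd-zfs -l disk1 da1   # one partition spanning the whole disc
    zpool create tank gpt/disk1             # using the label keeps the pool stable across renumbering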
> Have you done any sockets tuning?
> In an older posting the following values were recommended:
Yes, I need that to get the speed out of it for normal use
to a disc on a machine on the same ether - but even so, surely
it should block on a slow disc, not just abandon the mirroring?
-pete.
I recently updated a system to -STABLE, which hadn't been updated in
9 months or so. It started locking up at 3am every day. I have
updated to -STABLE from this morning and verified that this still happens.
The problem is the 'periodic daily' process. I have verified this
by running it by hand. Wha
Following up my own post - I did some investigating, and
this is caused by the security scripts, which are now increasing wired
pages a lot. This appears to push normal processes out into
swap until swap fills. With the security scripts disabled, the system
no longer crashes.
So, what changes betwe
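("Disabling" here just means a line in /etc/periodic.conf - a sketch of one
way to do it:
    daily_status_security_enable="NO"    # turns off the daily security run
)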
Just updated a machine to stable r297184 and I am now seeing
the non-terminating services problem described here:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208132
A quick 'svn log' on /usr/src/bin/csh/config_p.h shows that it too
has the patch which was in 1.3, but this has not been everte
> Your message led me to a thought that I could use iSCSI to replicate the
> zfs pool from node1 to both iSCSI-provided disks on node2 in a 4-way
> mirror, right? Are there any obvious obstacles to this, that I don't
> see, considering the bandwidth will be enough?
I have spent a long time doing
I upgraded my local machine to the above revision a few days ago. Since then I
have seen the local em0 card locking up and getting the following
messages in dmesg
em0: Watchdog timeout -- resetting
em0: link state changed to DOWN
em0: link state changed to UP
I thought it was the physical card,
> > I thought it was the physical card, but I have swapped this out
> > for a completely different one and the problems remain.
>
> What does pciconf -lvb display for the PCI IDs for this card ?
I have just dropped a third card into the machine (I need
to get some work done unfortunately) - for thi
Bit more testing on this, and I think it was a false alarm and my problem
was actually hardware related. Moving to a different PCI slot
stopped the problem happening. The original ethernet card is now
dead, however, and the slot has burn marks on it - which makes me
think it isn't software really ;)
Ok, that's a bit worrying if true - but I can confirm that l2arc works fine
under 10.3 on amd64, so what you say about cross-compiling might be true.
Am taking an interest in this as I have just deployed a lot of machines
which are going to be relying on l2arc working to get reasonable performance.
-pe
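(For context, adding an L2ARC device is a one-liner; a sketch with placeholder
names:
    zpool add tank cache gpt/cache0     # attach an SSD partition as L2ARC
    zpool iostat -v tank 5              # the cache device then reports its own stats
)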
Have ignored this thread until now, but I observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open and list. I delete
and remake them and the problem
> > I've the same issue, but only if the ZFS resides on a LSI MegaRaid and one
> > RAID0 for each disk.
> >
> Not in my case, both pool disks are attached to the Intel ICH7 SATA300
> controller.
Nor in my case - my discs are on this:
ahci0:
> In the bad case the metadata of every file will be placed in a random place on the disk.
> ls needs to access the metadata of every file before it starts output of the listing.
Umm, are we not talking about an issue where the directory no longer contains
any files? It used to have lots, now it has none.
> I.e. in bad case
> Oh, my goodness, how far afield nonsense has gotten! Have all the
> good folks posting in this thread forgotten how directory blocks are
> allocated in UNIX?
Not forgotten, just under the impression that ZFS shrinks directories,
unlike good old UFS. Apparently not, and yes, if that's true th
So, I am off sick and my colleagues decided to load test our set of five
servers excessively. All ran out of swap. So far so irritating, but what has
happened is that two of them now will not boot, as it appears the ZFS pool
they are booting from has become corrupted.
One starts to boot, then crase
> Silly question - have you checked that the swap partition does not
> overlap your boot pool partition? It could well be that the end of
> the swap partition intrudes into the affected ZFS pool
Interesting idea - all partitions were created with gpart add -a 8
but I have explicitly checked, and I
> How much trust do you put in your hardware? Have you ever put the
> hardware under full load for extended periods before e.g. run poudriere
> to build pkg repos?
Ah, now that is a good point. There are two drives in each machine, mirrored,
but one of each of the pair is 6 years old. We haven't s
> zpool import -N -O readonly=on -f -R /mnt/somezpoool
>
> If that doesn't help try:
>
> zpool import -N -O readonly=on -f -R /mnt/somezpoool -Fn
I got someone to do this (am still having trouble finding time
as I am supposed to be off sick) and it causes an instant kernel panic
on trying to import the
> When I get the drives back online on a system I will check this though, thanks
Just took a look and the partitions do not overlap. Thanks for the idea
though...
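(Checking is straightforward - 'gpart show' lists each partition's start and
size in sectors, so the swap and freebsd-zfs entries can be compared directly:
    gpart show da0
)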
> When you say corrupt what do you mean, specifically what's the output
> from zpool status?
It doesn't get that far - these are the boot pools and it
won't boot, one due to not finding the pool, the other due
to panicking when trying to import the pool. Attaching the discs
to another machine and tr
> At one point lz4 wasn't supported on boot, I seem to remember that may
> have been addressed but not 100% sure.
Yes, it's been addressed and works fine. Note that these machines booted
fine before and I haven't changed the OS, simply ran a lot
of Apache CGI scripts to force it to run out of swap, so its
> Instapanic, huh...
>
> Ok, let's put documentation aside and focus on unsupported development
> features.
Hi, sorry for not replying until today, but basically we got to the point
where getting the machine up again was more important than debugging, so
unfortunately I had to clone the drives off
I run an iSCSI setup booting using iPXE, which I build on the
FreeBSD server. The last few steps of the build do this:
objcopy -O binary -R .zinfo bin/ipxe.pxe.tmp bin/ipxe.pxe.bin
objcopy -O binary -j .zinfo bin/ipxe.pxe.tmp bin/ipxe.pxe.zinfo
This runs fine on 10-STABLE, but on
I have a pair of machines running HAST across two pairs of
drives - i.e. 4 drives total, two in each box, and hence
two hast resources, cbert0 and cebrt1.
It's been running MySQL for a long time, but there is only 6 gig
of data actually in use in the pool. I rebuilt the secondary
machine, and wanted to
Partially answering my own question here, as it occurred to
me that the zpool is scattering writes across the disc in 4k blocks,
but HAST has a minimum extent size of 2 meg by default. That seems like
a likely culprit for the 'dirty blocks' multiplication I am seeing.
I shrunk down the extent size to
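(If I read hast.conf(5) correctly, extentsize is a per-resource option, so the
change is something like the fragment below - the value is illustrative, and
changing it appears to require re-initialising the resource metadata with
'hastctl create':
    resource cbert0 {
            extentsize 65536    # down from the 2 MB default
            # ... existing 'on <host>' sections unchanged ...
    }
)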
I have a pair of machines which I have been running HAST on
for a number of years. It works well, it does what I need it
to do, and I haven't considered the details until recently.
As I understand it though, the minimum size of block copied is
set by default to something quite large (2 meg). I see wha