Re: bind 9.6.2 dnssec validation bug
Ollivier Robert wrote:
> Or switch to unbound.

^^^ Cute name, but perhaps a tiny bit misleading as to the product's origin -- the first thing I thought of on seeing a name like that was the FSF. Not this time: although its development was commercially sponsored, it is BSD-licensed open source.

And no, I have nothing at all to do with either the product or its developers/sponsors -- this is from the press release (where I had expected to find mention of the GPL).

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
Re: Removing all ZFS support from boot process
On Fri, 11 Feb 2011, Daniel O'Connor wrote:
> This is before the kernel boots, correct?

Yep.

> Can you take a picture of where it hangs? (you will have to host it
> somewhere though, as the list will reject non-text attachments).

Here you go:

http://galatea.salford.ac.uk/aix502/11022011448.jpg
http://galatea.salford.ac.uk/aix502/11022011449.jpg

The spinning char can seemingly be in any position when it crashes. It took 5 attempts that time to get to the beastie menu.

> I suspect it's in the loader and quite possibly it's your BIOS that is at
> fault, or at the very least there is a nasty interaction with it. Is there
> an update for the BIOS? Does this happen on other hardware?

I suspected the BIOS; that's why I was going to get a new motherboard. I've always had problems getting gptzfsboot working on this hardware and there are no more BIOS updates now. That's why I have a ufs root, as it only worked intermittently.

Then I wondered what the hell was going on in the loader that took >60s and seemingly touched every drive. I assumed it was FBSD that was tasting all the drives.

Cheers.
--
Mark Powell - UNIX System Administrator - The University of Salford
IT Services, Clifford Whitworth Building, Salford University, Manchester, M5 4WT, UK.
Tel: +44 161 295 6843 Fax: +44 161 295 6624 www.pgp.com for PGP key
Re: Removing all ZFS support from boot process
Hi,

On 11 February 2011 11:33, Mark Powell wrote:
> On Fri, 11 Feb 2011, Daniel O'Connor wrote:
[...]
> I suspected BIOS, that's why I was going to get a new motherboard. I've
> always had problems getting gptzfsboot working on this hardware and there
> are no more BIOS updates now. That's why I have ufs root, as it only worked
> intermittently.
> Then I wondered what the hell was going on in the loader that took >60s and
> seemingly touched every drive. I assumed it was FBSD that was tasting all
> the drives.
> Cheers.

AFAIK if you have gptzfsboot on your drives it will probe the partitions on your drives, which can take a while. So if you suspect ZFS it might really be an option to replace gptzfsboot with gptboot.

I recently changed the configuration of my home server from 1x80GiB SATA HDD (booting & /-pool) and 4x400GiB PATA HDD for a raidz pool on geli to 2x2TiB SATA HDD with a gmirrored /boot and the rest for a geli-encrypted zmirror (including /). For me it feels as if it takes a few seconds longer for the loader to appear. The latter uses GPT and labels where possible.

HTH
Christian
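[Editorial aside: swapping the boot code as suggested above is done with gpart(8). A hedged sketch, assuming a GPT-partitioned disk ada0 with the freebsd-boot partition at index 1 -- check your actual layout with "gpart show" first, and repeat for each bootable disk:]

```shell
# Inspect the partition layout to confirm the freebsd-boot partition index.
gpart show ada0

# Install the UFS-only boot code (gptboot) in place of gptzfsboot.
# -b writes the protective MBR, -p the partition boot code, -i the
# partition index it is written to.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
```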
Re: Removing all ZFS support from boot process
On Fri, 11 Feb 2011, Christian Walther wrote:
> AFAIK if you have gptzfsboot on your drives it will probe the partitions
> on your drives, which can take a while. So if you suspect ZFS it might
> really be an option to replace gptzfsboot with gptboot.

I have ufs / so isn't /boot/boot (loaded from slice start) in operation during this crash?
Thanks.
--
Mark Powell - UNIX System Administrator - The University of Salford
IT Services, Clifford Whitworth Building, Salford University, Manchester, M5 4WT, UK.
Tel: +44 161 295 6843 Fax: +44 161 295 6624 www.pgp.com for PGP key
Re: Removing all ZFS support from boot process
On 11/02/2011, at 21:03, Mark Powell wrote:
>> Can you take a picture of where it hangs? (you will have to host it
>> somewhere though, as the list will reject non-text attachments).
>
> Here you go:
>
> http://galatea.salford.ac.uk/aix502/11022011448.jpg
> http://galatea.salford.ac.uk/aix502/11022011449.jpg
>
> The spinning char can seemingly be in any position when it crashes. It took
> 5 attempts that time to get to the beastie menu.

OK.. unfortunately not really much help except confirming that it is in the BIOS/loader somewhere..

>> Is there an update for the BIOS? Does this happen on other hardware?
>
> I suspected BIOS, that's why I was going to get a new motherboard. I've
> always had problems getting gptzfsboot working on this hardware and there
> are no more BIOS updates now. That's why I have ufs root, as it only worked
> intermittently.
> Then I wondered what the hell was going on in the loader that took >60s and
> seemingly touched every drive. I assumed it was FBSD that was tasting all
> the drives.

I believe the loader does look at the drives the BIOS presents to it; certainly at the very least it tries to find something to boot off :) However, even if it is looking on every disk for partitions it should only take a second or so (unless one of the drives is broken, I suppose).

I have seen BIOSen not boot reliably when external RAID cards are present.. Generally their quality is quite variable :(
--
Daniel O'Connor software and network engineer for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there are so many of them to choose from." -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
FreeBSD 8.2-RC3 missing documentation
Hello,

Testing FreeBSD 8.2-RC3 I've noticed that documentation for daily_status_zfs_enable is missing from periodic.conf(5). Something like this could be added:

daily_status_zfs_enable
        (bool) Set to "YES" if you want to run "zpool status -x" to
        check for broken ZFS pools.

Regards.
Victor.
--
The most convincing proof that intelligent life exists on other planets is that they have not tried to contact us.
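[Editorial aside: for context, periodic(8) daily scripts gate on knobs like this with a case test on the variable's value. A minimal self-contained sketch of that pattern -- the variable name is from the message above; the script body is illustrative, not the actual /etc/periodic script, which sources /etc/defaults/periodic.conf and the periodic framework:]

```shell
#!/bin/sh
# Illustrative sketch of how a periodic(8) daily script gates on a
# periodic.conf knob (here hardcoded; normally read from periodic.conf).
daily_status_zfs_enable="YES"

case "$daily_status_zfs_enable" in
    [Yy][Ee][Ss])
        # The real script would run: zpool status -x
        echo "checking ZFS pools: zpool status -x"
        ;;
    *)
        echo "ZFS pool status check disabled"
        ;;
esac
```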
Re: tmpfs is zero bytes (no free space), maybe a zfs bug?
On (10/02/2011 16:56), Bruce Cran wrote:
> On Wed, 19 Jan 2011 11:09:31 +0100 Attila Nagy wrote:
> > I hope somebody can find the time to look into this, it's pretty
> > annoying...
>
> It's also listed as a bug on OpenSolaris:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do;?bug_id=6804661

Could you try the patch I mentioned above in the thread:
http://marc.info/?l=freebsd-fs&m=129735686129438&w=2

I reproduced the test scenario from the OpenSolaris bug report and it worked as expected for me.

System: amd64, 4GB RAM, ~5GB swap

/boot/loader.conf:
vm.kmem_size="6G"
vfs.zfs.prefetch_disable="1"
vfs.zfs.txg.timeout="5"

# mount -t tmpfs -o size=$((5*1024*1024*1024)) none /mnt
# dd if=/dev/zero of=test bs=1m count=$((3*1024))
# dd if=test of=/dev/zero bs=1m
# dd if=test of=/dev/zero bs=1m
# dd if=test of=/dev/zero bs=1m

top statistics:
Mem: 429M Active, 272M Inact, 2889M Wired, 96K Cache, 1328K Buf, 196M Free
Swap: 5120M Total, 5120M Free

ZFS seems to consume most of RAM.

# cp test /mnt

top statistics:
Mem: 2808M Active, 247M Inact, 623M Wired, 104M Cache, 1328K Buf, 5052K Free
Swap: 5120M Total, 619M Used, 4501M Free, 12% Inuse

The ZFS cache has shrunk, swap use has increased, and most of tmpfs remains in memory.

# df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         5.0G    3.0G    2.0G    60%    /mnt

Thanks,
Gleb.
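[Editorial aside: the size argument in the mount line above is plain POSIX shell arithmetic; a quick sanity check of the byte count it expands to:]

```shell
# 5 GiB in bytes, as passed to "mount -t tmpfs -o size=..." in the test above
size=$((5*1024*1024*1024))
echo "$size"   # prints 5368709120
```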
Re: 8.1 amd64 lockup (maybe zfs or disk related)
Thanks for all the help. I've learned some new things, but haven't fixed the problem yet.

> 1) Re-enable both CPU cores; I can't see this being responsible for the
> problem. I do understand the concern over added power draw, but see
> recommendation (4a) below.

I re-enabled all cores but experienced a lockup while running zpool scrub. I was able to run scrub twice with 4 of 6 cores enabled without lockup. Also, when lockup occurs I'm not able to access the debugger with ctrl-alt-esc. Just to keep things straight, since I'm running geli, more cores means more I/O throughput during a scrub.

If I'm not able to use the kernel debugger to diagnose this problem, should I disable it? Could it be a security risk?

> 1) Disable the JMicron SATA controller entirely.
>
> 2) Disable the ATI IXP700/800 SATA controller entirely.
>
> 3a) Purchase a Silicon Image controller (one of the models I referenced
> in my previous mail). Many places sell them, but lots of online vendors
> hide or do not disclose what ASIC they're using for the controller. You
> might have to look at their Driver Downloads section to find out what
> actual chip is used.

This is on my todo list, but as of now I'm still running the controllers on the motherboard. I should have the controller replaced by next week.

> 3b) You've stated you're using one of your drives on an eSATA cable. If
> you are using a SATA-to-eSATA adapter bracket[1][2], please stop
> immediately and use a native eSATA port instead.
>
> Adapter brackets are known to cause all sorts of problems that appear as
> bizarre/strange failures (xxx_DMAxx errors are quite common in this
> situation), not to mention with all the internal cabling and external
> cabling, a lot of the time people exceed the maximum SATA cable length
> without even realising it -- it's the entire length from the SATA port
> on your motherboard, to and through the adapter (good luck figuring out
> how much wire is used there), to the end of the eSATA cable. Native
> eSATA removes use of the shoddy adapters and also extends the maximum
> cable length (from 1 metre to 2 metres), plus provides the proper amount
> of power for eSATA devices (yes this matters!). Wikipedia has
> details[3].
>
> Silicon Image and others do make chips that offer both internal SATA and
> an eSATA port on the same controller. Given your number of disks, you
> might have to invest in multiple controllers.

My motherboard has an eSATA port and that's what I'm using (not an extension bracket). Do you still recommend against it? I figured one fewer drive in the case would reduce the load on my PSU.

> 4a) Purchase a Kill-a-Watt meter and measure exactly how much power your
> entire PC draws, including on power-on (it will be a lot higher during
> power-on than during idle/use, as drives spinning up draw lots of amps).
> I strongly recommend the Kill-a-Watt P4600 model[4] over the P4400 model.
> Based on the wattage and amperage results, you should be able to
> determine if you're nearing the maximum draw of your PSU.

The Kill-a-Watt meter arrived today. It looks like during boot it's not exceeding 200 watts. During a zpool scrub it gets up to ~255 watts (with all cores enabled). So I don't think the problem is gross power consumption.

> 4b) However, even if you're way under-draw (say, 400W), the draw may not
> be the problem but instead the maximum amount of power/amperage/whatever
> a single physical power cable can provide. I imagine to some degree it
> depends on the gauge of wire being used; excessive use of Y-splitters to
> provide more power connectors than the physical cable provides means
> that you might be drawing too much across the existing gauge of cable
> that runs to the PSU. I have seen setups where people have 6 hard disks
> coming off of a single power cable (with Y-splitters and molex-to-SATA
> power adapters) and have their drives randomly drop off the bus. Please
> don't do this.

Yes, this seems like it could be a problem. I'll shut down and figure out which drives are connected to which cables. Maybe with some rearranging I can even out the load. Even if I have a bunch of drives on a single cable, would a voltage drop on one cable filled with drives be enough to lock up the machine? It seems like the motherboard power would be unaffected.

> A better solution might be to invest in a server-grade chassis, such as
> one from Supermicro, that offers a hot-swap SATA backplane. The
> backplane provides all the correct amounts of power to the maximum
> number of disks that can be connected to it. Here are some cases you
> can look at [5][6][7]. Also be aware that if you're already using a
> hot-swap backplane, most consumer-grade ones are complete junk and have
> been known to cause strange anomalies; it's always best in those
> situations to go straight from motherboard-to-drive or card-to-drive.

This would be nice, but it's not in my budget right now. I'll keep it in mind for my next
Re: 8.1 amd64 lockup (maybe zfs or disk related)
On Fri, Feb 11, 2011 at 07:24:27PM -0800, Greg Bonett wrote:
> Thanks for all the help. I've learned some new things, but haven't fixed
> the problem yet.
>
> > 1) Re-enable both CPU cores; I can't see this being responsible for the
> > problem. I do understand the concern over added power draw, but see
> > recommendation (4a) below.
>
> I re-enabled all cores but experienced a lockup while running zpool
> scrub. I was able to run scrub twice with 4 of 6 cores enabled without
> lockup. Also, when lockup occurs I'm not able to access the debugger
> with ctrl-alt-esc. Just to keep things straight, since I'm running
> geli, more cores means more I/O throughput during a scrub.

Okay, and what happens if you disable two cores and re-install the disks you removed? Does the system lock up during "zpool scrub" then?

Basically I'm trying to figure out if the problem is related to having 6 cores enabled, or if it's related to having too many disks in use. If it happens in both cases (4 of 6 cores w/ all disks attached, and 6 of 6 cores w/ only some disks attached), then it's probably a motherboard or PSU issue as suspected.

> If I'm not able to use the kernel debugger to diagnose this problem,
> should I disable it? Could it be a security risk?

Let me explain why I advocated adding the debugger to your kernel. Basically if the machine "locks up" you are supposed to try and press Ctrl-Alt-Esc to see if you drop to a db> prompt. If so, the kernel is still alive/working, and the machine actually isn't "hard locked".

The debugger is not a security risk. There are only 3 ways (sans serial console, which isn't in use on your system so it doesn't apply) I know of to induce the debugger: 1) execute "sysctl debug.kdb.enter=1" as root, 2) physically press Ctrl-Alt-Esc on the VGA console, 3) crash the machine.

> > 3b) You've stated you're using one of your drives on an eSATA cable. If
> > you are using a SATA-to-eSATA adapter bracket[1][2], please stop
> > immediately and use a native eSATA port instead.
[...]
> My motherboard has an eSATA port and that's what I'm using (not an
> extension bracket). Do you still recommend against it? I figured one
> fewer drive in the case would reduce the load on my PSU.

If the eSATA port is on the motherboard backplane (e.g. a port that's soldered to the motherboard), then you're fine. Be aware that the eSATA port may be connected to the JMicron controller, however, which I've already said is of questionable quality to begin with. :-)

Is your eSATA enclosure/hard disk powered off of the eSATA cable, or are you using an AC adapter with it? That will determine use of additional load on the PSU.

> > 4a) Purchase a Kill-a-Watt meter and measure exactly how much power your
> > entire PC draws, including on power-on (it will be a lot higher during
> > power-on than during idle/use, as drives spinning up draw lots of amps).
[...]
> Kill-a-Watt meter arrived today. It looks like during boot it's not
> exceeding 200 watts. During a zpool scrub it gets up to ~255 watts
> (with all cores enabled). So I don't think the problem is gross power
> consumption.

And this is with all 6 cores enabled, AND all disks attached, during a "zpool scrub"? If so, I agree the PSU load is not a problem.

Voltages and so on could be a problem, but FreeBSD's hardware monitoring support is sub-par when it comes to any system made after ~2002, so you won't get very far monitoring such in the OS. I speak the truth given that I maintain the Supermicro-specific hardware monitoring software (ports/sysutils/bsdhwmon). :-) System BIOSes provide hardware monitoring indicators (voltages, etc.), but voltages will slightly change/shift when running under an OS vs. looking at them in the BIOS. Viewing the BIOS attributes would be worthwhile to verify if,
Re: bind 9.6.2 dnssec validation bug
On Thu, February 10, 2011 2:47 pm, Ollivier Robert wrote:
> According to Russell Jackson:
>> Looks like I should just suck it up and start using the bind97 port.
>
> Or switch to unbound.

Unless you need/allow recursion for your internal || stealth || secondary/slave servers. In fact, that's the _only_ reason I haven't already switched to unbound.

--Chris

> --
> Ollivier ROBERT -=- FreeBSD: The Power to Serve! -=- robe...@keltia.freenix.fr
> In memoriam to Ondine : http://ondine.keltia.net/
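[Editorial aside: unbound is a validating recursive resolver only, so authoritative/secondary service has to stay on BIND or similar. A hedged sketch of an unbound.conf that hands queries for an internal zone to an existing authoritative server -- the zone name, addresses, and file path are placeholders, not from this thread:]

```
# /var/unbound/unbound.conf (illustrative placeholders throughout)
server:
        interface: 127.0.0.1
        access-control: 10.0.0.0/8 allow

# Send queries for an internal zone to the existing authoritative
# server instead of resolving them on the public Internet.
stub-zone:
        name: "internal.example"
        stub-addr: 10.0.0.53
```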
Re: ATI Radeon LW RV200 Mobility 7500 M7 locks up on X exit
On Fri, February 11, 2011 11:12 am, Ted Faber wrote:
> For the last couple weeks (maybe more) I've been having an intermittent
> problem on my Thinkpad T42 where exiting X causes my screen to lock up and
> the system seems to stop doing anything. Lately it's happening about every
> 3rd time.
>
> The usual failure mode is that I select shutdown from the gnome menu and
> it logs out with the console showing (text mode), but non-responsive. The
> disk LED lights intermittently, as can the LAN LED (though sometimes it
> comes on solid). Sometimes it sort of shakes itself awake after a minute or
> so, but often the shutdown doesn't complete and I have to force a power
> cycle and fsck everything.
>
> I don't get anything useful in /var/log/messages
>
> I run a recent -STABLE
> $ uname -a
> FreeBSD praxis.lunabase.org 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #62: Sun
> Feb 6 18:02:17 PST 2011 r...@praxis.lunabase.org:/usr/obj/usr/src/sys/GENERIC
> i386
>
> I've attached a verbose boot dmesg and my xorg.conf, and the
> /var/log/Xorg.0.log from a login.
>
> Any help would be great.

I noticed a potential issue in the output of your attached Xorg.conf. But as I don't have an immediate solution for that, I /will/ offer you some advice based on my experiences with recent versions of Xorg(1) on nVidia-based cards.

All the docs will advise the following two entries in your rc.conf(5):

hald_enable="YES"
dbus_enable="YES"

However, _unless_ I use the following, I will _always_ run into some sort of problem:

hald_enable="NO"
dbus_enable="YES"

I have no idea what's going on with hald(8), but frankly, it appears nothing. Research on forums related to issues on nVidia & ATI video cards has many threads that ultimately point at issues using hald(8). Bottom line (for me anyway) has been that if I disable hald(8), I have nearly no (video-related) issues. This is both on x86 && amd64 systems.

HTH
--Chris

> --
> http://www.lunabase.org/~faber
> Unexpected attachment? http://www.lunabase.org/~faber/FAQ.html#SIG