I've never done any kernel hacking before, so I'm just looking
for some pointers. What's needed is a mechanism to
specify a directory (or a set of them) so that whenever a request
is made for the contents of a directory on that list, what is
returned gets mangled in some way.
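For illustration only (names here are hypothetical, and this is userland code,
not the kernel hook itself): the following shows the kind of per-directory
filtering meant above. In the kernel the same logic would have to sit in or
around the filesystem's directory-read path (VOP_READDIR) rather than on top
of readdir(3).

#include <sys/types.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Placeholder for the "list of directories to mangle" lookup. */
static int
is_mangled_dir(const char *path)
{
	return (strcmp(path, "/some/special/dir") == 0);
}

int
main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : ".";
	struct dirent *dp;
	DIR *dirp;

	if ((dirp = opendir(path)) == NULL)
		return (1);
	while ((dp = readdir(dirp)) != NULL) {
		if (is_mangled_dir(path) && dp->d_name[0] != '.')
			printf("mangled-%s\n", dp->d_name);	/* example mangling */
		else
			printf("%s\n", dp->d_name);
	}
	closedir(dirp);
	return (0);
}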
Matt Dillon wrote:
> Yah... the test I ran was just a couple of seconds worth of playing
> around over ssh. I expect the worst case to be a whole lot worse.
>
> We're going to have to bump up UPAGES to 3 in 4.x, there's no question
> about it. I'm going to do it tonight.
Heh.
:stack size = 4688
Sep 24 22:47:22 test1 /kernel: process 29144 exit kstackuse 4496
closer... :-)
-Matt
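(For anyone following along: numbers like the "kstackuse 4496" above are
usually produced with a fill-and-scan trick. The sketch below shows the idea
only; the constants and names are made up and this is not the actual patch.)

#include <stddef.h>

#define PAGE_SIZE	4096
#define UPAGES		2			/* RELENG_4/i386 default */
#define KSTACK_BYTES	(UPAGES * PAGE_SIZE)	/* shared with the pcb/u-area */
#define KSTACK_FILL	0xdeadc0deUL
#define KSTACK_WORDS	(KSTACK_BYTES / sizeof(unsigned long))

/* Fill the stack region with a known pattern when the process is set up. */
static void
kstack_fill(unsigned long *stack_bottom)
{
	size_t i;

	for (i = 0; i < KSTACK_WORDS; i++)
		stack_bottom[i] = KSTACK_FILL;
}

/*
 * At exit, scan up from the bottom: the first overwritten word marks the
 * deepest the stack ever grew.  (This can under-report if the deepest
 * frames never wrote some of their words, i.e. a sparse stack.)
 */
static size_t
kstack_used(const unsigned long *stack_bottom)
{
	size_t i;

	for (i = 0; i < KSTACK_WORDS; i++)
		if (stack_bottom[i] != KSTACK_FILL)
			break;
	return ((KSTACK_WORDS - i) * sizeof(unsigned long));
}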
:
:Matt Dillon wrote:
:> This isn't perfect but it should be a good start in regards to
:> testing kstack use. This patch is against -stable. It reports
:> kernel stack use on process exit and will generate a 'Kernel stack
:> underflow' message if it detects an underflow. It do
Matt Dillon wrote:
> This isn't perfect but it should be a good start in regards to
> testing kstack use. This patch is against -stable. It reports
> kernel stack use on process exit and will generate a 'Kernel stack
> underflow' message if it detects an underflow. It doesn't p
This isn't perfect but it should be a good start in regards to
testing kstack use. This patch is against -stable. It reports
kernel stack use on process exit and will generate a 'Kernel stack
underflow' message if it detects an underflow. It doesn't panic,
so for a fun time
:Oh, one other thing... When we had PCIBIOS active for pci config space
:read/write support, we had stack overflows on many systems when the SSE
:stuff got MFC'ed. The simple act of trimming about 300 bytes from the
:pcb_save structure was enough to make the difference between it working or
:not
Matt Dillon wrote:
> :
> :I did it as part of the KSE work in 5.x. It would be quite easy to do it
> :for 4.x as well, but it makes a.out coredumps problematic.
> :
> :Also, "options UPAGES=4" is a pretty good defensive measure.
> :
> :Cheers,
> :-Peter
> :--
> :Peter Wemm - [EMAIL PROTECTED]; [E
On Mon, 24 Sep 2001, Matt Dillon wrote:
> Yowzer. How the hell did that happen! Yes, you're right, the
> vm_page_array[] pointer has gotten corrupted. If we assume that
> the vm_page_t is valid (0xc0842acc), then the vm_page_buckets[]
> pointer should be that.
...
> This
:
:I did it as part of the KSE work in 5.x. It would be quite easy to do it
:for 4.x as well, but it makes a.out coredumps problematic.
:
:Also, "options UPAGES=4" is a pretty good defensive measure.
:
:Cheers,
:-Peter
:--
:Peter Wemm - [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Matt Dillon wrote:
>
> :>The pointers in the last few entries of the vm_page_buckets array got
> :>corrupted when an argument to a function that manipulated whatever was next
> :>in RAM was 0, and it turned out that it was 0 because
> :> of some PTE flushing thing (you are the one that found it...
Andrew Gallatin wrote:
>
> Matt Dillon writes:
> >
> > :What happens on an ECC equipped PC when you have a multi-bit memory
> > :error that hardware scrubbing can't fix? Will there be some sort of
> > :NMI or something that will panic the box?
> > :
> > :I'm used to alphas (where you'll g
Andrew Gallatin wrote:
>
> What happens on an ECC equipped PC when you have a multi-bit memory
> error that hardware scrubbing can't fix? Will there be some sort of
> NMI or something that will panic the box?
>
> I'm used to alphas (where you'll get a fatal machine check panic) and
> I am just
Matt Dillon writes:
>
> :What happens on an ECC equipped PC when you have a multi-bit memory
> :error that hardware scrubbing can't fix? Will there be some sort of
> :NMI or something that will panic the box?
> :
> :I'm used to alphas (where you'll get a fatal machine check panic) and
>
:What happens on an ECC equipped PC when you have a multi-bit memory
:error that hardware scrubbing can't fix? Will there be some sort of
:NMI or something that will panic the box?
:
:I'm used to alphas (where you'll get a fatal machine check panic) and
:I am just wondering if PCs are as safe.
:
stack can be somewhat sparse depending on execution path, but it's not a
bad idea.
On Mon, 24 Sep 2001, Matt Dillon wrote:
> :In message <[EMAIL PROTECTED]>, Matt Dillon writes:
> :>
> :>Hmm. Do we have a guard page at the base of the per process kernel
> :>stack?
> :
> :As I understa
What happens on an ECC equipped PC when you have a multi-bit memory
error that hardware scrubbing can't fix? Will there be some sort of
NMI or something that will panic the box?
I'm used to alphas (where you'll get a fatal machine check panic) and
I am just wondering if PCs are as safe.
Thanks
:
:In message <[EMAIL PROTECTED]>, Matt Dillon writes:
:>
:>Hmm. Do we have a guard page at the base of the per process kernel
:>stack?
:
:As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
:pages of per-process kernel state at p->p_addr. The stack grows
:down from the t
:In message <[EMAIL PROTECTED]>, Matt Dillon writes:
:>
:>Hmm. Do we have a guard page at the base of the per process kernel
:>stack?
:
:As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
:pages of per-process kernel state at p->p_addr. The stack grows
:down from the top,
In message <[EMAIL PROTECTED]>, Matt Dillon writes:
>
>Hmm. Do we have a guard page at the base of the per process kernel
>stack?
As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
pages of per-process kernel state at p->p_addr. The stack grows
down from the top, and str
Not in 4.x, I believe; we do in 5.x.
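(A rough picture of the layout being described, purely as an illustration and
with UPAGES == 2 on i386/RELENG_4 assumed: the u-area and pcb live at the
bottom of the region at p->p_addr and the kernel stack grows down from the
top, so with no guard page an overflow scribbles on per-process state instead
of faulting.)

#include <stddef.h>

#define PAGE_SIZE	4096
#define UPAGES		2

/* Top of the kernel stack: it grows down from here. */
static unsigned long
kstack_top(unsigned long p_addr)
{
	return (p_addr + UPAGES * PAGE_SIZE);
}

/*
 * Lowest "safe" stack address: everything below this is the struct
 * user/pcb, which an overflowing stack silently overwrites.
 */
static unsigned long
kstack_limit(unsigned long p_addr, size_t sizeof_struct_user)
{
	return (p_addr + sizeof_struct_user);
}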
On Mon, 24 Sep 2001, Matt Dillon wrote:
>
> :>The pointers in the last few entries of the vm_page_buckets array got
> :>corrupted when an argument to a function that manipulated whatever was next
> :>in RAM was 0, and it turned out that it was 0 because
> :> o
:
:remember that we hit almost this problem with the KSE stuff during
:debugging?
:
:The pointers in the last few entries of the vm_page_buckets array got
:corrupted when an argument to a function that manipulated whatever was next
:in RAM was 0, and it turned out that it was 0 because
: of some P
:>The pointers in the last few entries of the vm_page_buckets array got
:>corrupted when an argument to a function that manipulated whatever was next
:>in RAM was 0, and it turned out that it was 0 because
:> of some PTE flushing thing (you are the one that found it... remember?)
:
:I think I've a
>
>The pointers in the last few entries of the vm_page_buckets array got
>corrupted when an argument to a function that manipulated whatever was next
>in RAM was 0, and it turned out that it was 0 because
> of some PTE flushing thing (you are the one that found it... remember?)
I think I've also s
> Tell me if I am wrong, but from the floppy, the files kern.flp &
> mfsroot.flp are compressed and then uncompressed into memory.
>
> If so, that means that the FreeBSD box is running these programs from
> RAM and not from the floppy, right?
Correct. They're running with the root device set
remember that we hit almost this problem with the KSE stuff during
debugging?
The pointers in the last few entries of the vm_page_buckets array got
corrupted when an argument to a function that manipulated whatever was next
in RAM was 0, and it turned out that it was 0 because
of some PTE flushin
Peter Wullinger wrote:
>
> While at the topic:
>
> Anybody yet thought about a mountable filesystem for sysctl() values?
>
> Would be like Linux-procfs, but a lot cleaner (wth has pci/ to do with /proc?)
You are late... :-) Please see the archives of freebsd-arch (some 3-4
months ago).
The co
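(For reference, the values such a filesystem would publish are what sysctl(3)
already provides from userland; a minimal example using sysctlbyname():)

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int ncpu;
	size_t len = sizeof(ncpu);

	/* Read one MIB entry by name; a sysctl filesystem would expose the
	   same namespace as files. */
	if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("hw.ncpu = %d\n", ncpu);
	return (0);
}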
:
:In message <[EMAIL PROTECTED]>, Matt Dillon writes:
:>
:>$8 = 58630
:>(kgdb) print vm_page_buckets[$8]
:
:What is vm_page_hash_mask? The chunk of memory you printed out below
:looks alright; it is consistent with vm_page_array == 0xc051c000. Is
:it just the vm_page_buckets[] pointer that is co
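(For readers without the source handy: vm_page_buckets[] is the page hash.
Roughly, and not the exact 4.x code, it maps an (object, pindex) pair to a
chain of vm_page structures, masked with vm_page_hash_mask, which is why a
corrupted bucket pointer sends lookups walking through garbage.)

#include <stddef.h>

struct vm_page {
	struct vm_page	*hnext;		/* hash chain link */
	void		*object;
	unsigned long	 pindex;
};

static struct vm_page	**vm_page_buckets;	/* allocated at boot */
static unsigned long	  vm_page_hash_mask;	/* (bucket count - 1) */

static unsigned long
vm_page_hash(void *object, unsigned long pindex)
{
	return (((unsigned long)object + pindex) & vm_page_hash_mask);
}

static struct vm_page *
vm_page_lookup_sketch(void *object, unsigned long pindex)
{
	struct vm_page *m;

	for (m = vm_page_buckets[vm_page_hash(object, pindex)];
	    m != NULL; m = m->hnext) {
		if (m->object == object && m->pindex == pindex)
			return (m);
	}
	return (NULL);
}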
I saw a duplicate in one of the capabilities that were submitted to -bugs earlier.
This had me thinking. What happens when a duplicate capability exists in termcap?
Are there any other duplicates in termcap.src? If so, which ones?
The first attachment is a perl script that strips all cruft from term
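(One simple way to look for such duplicates, sketched here in C rather than
the perl script attached to the original mail: split each termcap entry on
':' and compare the two-character capability names.)

#include <stdio.h>
#include <string.h>

static void
report_duplicates(const char *entry)
{
	char buf[4096], *fields[512], *p;
	int i, j, n = 0;

	strncpy(buf, entry, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	for (p = strtok(buf, ":"); p != NULL && n < 512; p = strtok(NULL, ":"))
		fields[n++] = p;

	/* Field 0 is the name/alias list; capability names are the first
	   two characters of every later field. */
	for (i = 1; i < n; i++)
		for (j = i + 1; j < n; j++)
			if (strncmp(fields[i], fields[j], 2) == 0)
				printf("duplicate capability: %.2s\n", fields[i]);
}

int
main(void)
{
	/* Hypothetical entry with "am" given twice. */
	report_duplicates("dumb|80-column dumb tty:am:co#80:bs:am:");
	return (0);
}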
On 23-Sep-01 Evan Sarmiento wrote:
> Hello,
>
> After compiling a new kernel, installing it, when my laptop
> tries to mount its drive, it panics with this message:
>
> panic: lock (sleep mutex) vnode interlock not locked @
> ../../../kern/vfs_default.c:460
>
> which is:
>
> if (ap->a_
++ 24/09/01 11:30 -0700 - Ulf Zimmermann:
| Still seems to have the S1G bug:
|
| Connected to cvsup4.freebsd.org
| Server cvsup4.freebsd.org has the S1G bug
This should go to the maintainer of cvsup4.freebsd.org, available at:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/cvsup.html#CVS
Still seems to have the S1G bug:
Connected to cvsup4.freebsd.org
Server cvsup4.freebsd.org has the S1G bug
--
Regards, Ulf.
-
Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-865-0204
Thanks for the responses; as expected, it was an operator head-space problem:
my lack of understanding of how the default queues and bandwidth would make
ping look. Apparently, enough delay is introduced merely by adding a pipe that
the ping client times out waiting for the response. The response was actuall
As a side note, Irix and Solaris provide cachefs for this purpose and use
NFS filesystems as examples (other examples may include CD-ROM, etc.).
Charles
-Original Message-
From: David Malone [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 24, 2001 8:26 AM
To: Attila Nagy
Cc: [EMAIL P
> Hello,
>
> | In short, which program gives enough knowledge to the microprocessor (?)
> | and allows it to use kern.flp & mfsroot.flp in order to boot and get the
> | operating system running.
>
> your BIOS reads the first sector from your floppy, which consists
> of a boot loader, which usual
On Mon, 24 Sep 2001, David Malone wrote:
> On Mon, Sep 24, 2001 at 01:07:00PM +0200, Attila Nagy wrote:
> > I'm just curious: is it possible to set up an NFS server and a client
> > where the client has very big (28 GB maximum for FreeBSD?) swap area on
> > multiple disks and caches the NFS expor
On Mon, Sep 24, 2001 at 01:07:00PM +0200, Attila Nagy wrote:
> I'm just curious: is it possible to set up an NFS server and a client
> where the client has very big (28 GB maximum for FreeBSD?) swap area on
> multiple disks and caches the NFS exported data on it?
> This could save a lot of bandwid
Hi,
I have noticed some strange behaviour with 4.3-RELEASE and dump. I have
been dumping my filesystems through gzip into a compressed dumpfile.
Some of the resulting dumps have been MUCH larger than I would expect.
As an example, I have just dumped my /home partition; note that lots
of dir
On Mon, Sep 24, 2001 at 01:07:00PM +0200, Attila Nagy wrote:
> Hello,
>
> I'm just curious: is it possible to set up an NFS server and a client
> where the client has very big (28 GB maximum for FreeBSD?) swap area on
> multiple disks and caches the NFS exported data on it?
> This could save a lo
Hello,
I'm just curious: is it possible to set up an NFS server and a client
where the client has very big (28 GB maximum for FreeBSD?) swap area on
multiple disks and caches the NFS exported data on it?
This could save a lot of bandwidth on the NFS server and also reduce the load
on it.
Thanks,
-
Ok, here is the second set of results. I didn't run all the tests
because nothing I did appeared to really have much of an effect. In
this set of tests I set MAXMEM to 128M. As you can see the buildworld
took longer versus 512M (no surprise), and vmiodirenable still helped