This is quite interesting. I'm no scheduler expert, but my understanding
is that priority < PUSER won't degrade and is only set in kernel mode after
waking up from a sleep. In user mode, processes should always have priority
p_usrpri >= PUSER; this is obviously not true for a negative nice value:
>
On Tue, 25 Apr 2000 23:00:07 -0700, Doug Barton <[EMAIL PROTECTED]> wrote:
>Anatoly Vorobey wrote:
>
>> Well, *should* we have a built-in "test"? I gather the original ash didn't
>> have it due to the KIS principle. But if it speeds things up considerably,
> > it's not much of a bloat, is it? I'd volunteer
I dropped hints that there may be issues about 3 weeks ago, as my machine
had locked up for apparently no reason, and I had no idea why until
recently. It seems that it has everything to do with running things that
use lots of CPU at a very high priority (I use -20).
I've been struggling for a f
[cc'ing to hackers, to get this archived]
George Cox wrote:
>
> On 26/04 19:29, Daniel C. Sobral wrote:
>
> > BTW, loader reads FAT just fine too, thank you.
>
> I have a related question :-)
>
> Is there any way I can put a FreeBSD kernel on a DOS partition, and load
> it, specifying the roo
In the last episode (Apr 27), Andrew Reilly said:
>
> Because 0.0 might be the closest approximation to whatever
> number you were really trying to divide by that the hardware can
> manage. 0 is never an approximation to 1 or -1.
Aaah, but that assumes you're not also trapping on underflow :)
On Mon, Apr 24, 2000 at 07:28:32PM -0400, Joseph Jacobson wrote:
> See RFC1122, section 3.2.1.3, available at
> http://www.cis.ohio-state.edu/htbin/rfc/rfc1122.html
> http://www.faqs.org/rfcs/rfc1122.html
Right. Assuming we're looking at the same section, it says:
[...]
FYI I tried the xe driver in 4.0-STABLE and could not get it to work.
I have a 16-bit Xircom RE-10 ethernet adaptor.
Here is what I tried in /etc/pccard.conf:
# Xircom CreditCard Ethernet 10/100 + modem (Ethernet part)
card "Xircom" "Ethernet Adapter"
config auto "xe1" 9
insert l
On Wed, Apr 26, 2000 at 07:55:19PM +0000, Anatoly Vorobey wrote:
> > Unfortunately, the only way to tell for sure would be to do a couple
> > make worlds with the current sh, then do some with super-sh with the
> > built in 'test'.
>
> You are right. I will do it, and report the results.
Dan Nelson wrote:
>
> In the last episode (Apr 26), Kent Stewart said:
> > I just noticed that mine isn't showing "Tagged Queueing Enabled"; is
> > that something I can set? The adapter is an Adaptec 2940uw.
> >
> > da0 at ahc0 bus 0 target 4 lun 0
> > da0: Fixed Direct Access SCSI-2 device
> >
On Wed 2000-04-19 (16:51), Neil Blakey-Milner wrote:
> I have another idea: We make a sh script named "rcsource" or whatever,
> which we source when we want to have the rc environment, stealing your
> code maliciously:
>
> /--
> sourcercs_sourced_files=
> sourcercs() {
> local rc_conf_
I had added a synchronizing instruction to the MP unlock code in
November after the issue was brought up in the lists and linux folks
thought there might be a synchronization issue.
It turns out that there is no issue. This was referred to me by
Mike Silbersack:
http://ke
> > A modern hard disk can do 10-30 MBytes/sec to/from the platter, assuming
> > no seeks. But the moment it needs to seek the performance drops
> > drastically ... generally down to 1-5 MBytes/sec.
>
> I haven't seen any 30MB/s. The 10K LVD IBM's were just about the
> fastest at 20MB/s co
:In the last episode (Apr 26), Kent Stewart said:
:> I just noticed that mine isn't showing "Tagged Queueing Enabled"; is
:> that something I can set? The adapter is an Adaptec 2940uw.
:>
:> da0 at ahc0 bus 0 target 4 lun 0
:> da0: Fixed Direct Access SCSI-2 device
:> da0: 40.000MB/s transfers (
In the last episode (Apr 26), Kent Stewart said:
> I just noticed that mine isn't showing "Tagged Queueing Enabled"; is
> that something I can set? The adapter is an Adaptec 2940uw.
>
> da0 at ahc0 bus 0 target 4 lun 0
> da0: Fixed Direct Access SCSI-2 device
> da0: 40.000MB/s transfers (20.000MH
Matthew Dillon wrote:
>
> The standard PCI bus can do 130 MBytes/sec. Even with overhead issues
> (setup for a DMA burst) it can still do 100 MBytes/sec.
But that depends on what else is going on at the same time. There are
three other cards on my PCI bus. You can eliminate one becaus
The standard PCI bus can do 130 MBytes/sec. Even with overhead issues
(setup for a DMA burst) it can still do 100 MBytes/sec.
A standard SCSI controller can do 40, 80, and now even 160 MBytes/sec
over the wire - standard copper cabling w/ LVD connectors (example
below).
On Wed, Apr 26, 2000 at 12:16:51PM -0400, Bill Fumerola wrote:
> On Wed, Apr 26, 2000 at 11:03:45AM -0500, Dan Nelson wrote:
>
> > Why should we treat (1.0/0.0) any differently from (1/0)?
>
> Because Linux has the uncanny ability to both divide by zero and produce
> the shittiest coders the wor
* Mike Smith <[EMAIL PROTECTED]> [000426 10:46] wrote:
> > * Stephen Hocking <[EMAIL PROTECTED]> [000426 09:23] wrote:
> > > Is there any chance of extending the loader so that it can set the memory
> > > size, rather than hard coding it into the kernel config file? This would be
> > > quite use
> * Stephen Hocking <[EMAIL PROTECTED]> [000426 09:23] wrote:
> > Is there any chance of extending the loader so that it can set the memory
> > size, rather than hard coding it into the kernel config file? This would be
> > quite useful for testing things which like a large amount of memory set
> Is there any chance of extending the loader so that it can set the memory
> size, rather than hard coding it into the kernel config file? This would be
> quite useful for testing things which like a large amount of memory set aside
> exclusively for hardware's use (I'm thinking of Utah-GLX's
On Tue, Apr 25, 2000 at 11:00:07PM -0700, Doug Barton wrote:
> Anatoly Vorobey wrote:
>
> > Well, *should* we have a built-in "test"? I gather the original ash didn't
> > have it due to the KIS principle. But if it speeds things up considerably,
> > it's not much of a bloat, is it? I'd volunteer
Narvi wrote:
>
> On Sat, 22 Apr 2000, Matthew Dillon wrote:
>
> [snip]
>
> > disk itself is probably the bottleneck. Disk writes tend to be
> > somewhat slower than disk reads and the seeking alone (between source
> > file and destination file), even when using a large block size
On Wed, Apr 26, 2000 at 11:03:45AM -0500, Dan Nelson wrote:
> Why should we treat (1.0/0.0) any differently from (1/0)?
Because Linux has the uncanny ability to both divide by zero and produce
the shittiest coders the world has ever seen.
--
Bill Fumerola - Network Architect
Computer Horizons
Alfred Perlstein wrote:
>
> * Stephen Hocking <[EMAIL PROTECTED]> [000426 09:23] wrote:
> > Is there any chance of extending the loader so that it can set the memory
> > size, rather than hard coding it into the kernel config file? This would be
> > quite useful for testing things which like a la
In the last episode (Apr 26), Sheldon Hearn said:
> On Tue, 25 Apr 2000 00:05:23 MST, Brooks Davis wrote:
> > > Is FreeBSD's behavior correct? Why or why not? You can use the
> > > included code snippet to verify that this occurs.
> >
> > FreeBSD has traditionally violated the IEEE FP standard in this regard.
* Stephen Hocking <[EMAIL PROTECTED]> [000426 09:23] wrote:
> Is there any chance of extending the loader so that it can set the memory
> size, rather than hard coding it into the kernel config file? This would be
> quite useful for testing things which like a large amount of memory set aside
>
I have just set up IPv6 on my network at home, and there was much
rejoicing. :-)
Now the problem is that legacy apps don't have v6 support. One idea I
have floating around in my head is a socks-like combination of libc
support and faith to allow IPv6-only networks to participate in ip
Is there any chance of extending the loader so that it can set the memory
size, rather than hard coding it into the kernel config file? This would be
quite useful for testing things which like a large amount of memory set aside
exclusively for hardware's use (I'm thinking of Utah-GLX's DMA buff
On Sat, 22 Apr 2000, Matthew Dillon wrote:
[snip]
> disk itself is probably the bottleneck. Disk writes tend to be
> somewhat slower than disk reads and the seeking alone (between source
> file and destination file), even when using a large block size,
> will reduce performanc
Dennis writes:
> Is there support for large mbufs in v4.0? (that is, allocations of any size?)
>
There are 2 ways to use large mbufs:
o options MCLSHIFT=XXX in your kernel config file,
  where the cluster size is 1 << XXX bytes. E.g., MCLSHIFT=12 gives 4K mbuf
  clusters, MCLSHIFT=13 gives 8K clusters, etc. con
Is there support for large mbufs in v4.0? (that is, allocations of any size?)
Dennis
On Tue, 25 Apr 2000 00:05:23 MST, Brooks Davis wrote:
> > Is FreeBSD's behavior correct? Why or why not? You can use the included
> > code snippet to verify that this occurs.
>
> FreeBSD has traditionally violated the IEEE FP standard in this regard.
> This is fixed in 5.0 and I think in 4.0
On Tue, 18 Apr 2000 22:53:22 +0100, Brian Somers wrote:
> I'm not sure why sanity won here though. I guess it'll be done the
> next time it comes up
Reason won in the Bourne shell case because ours is actively maintained.
Until ours is no longer actively maintained by a responsive, clueful