< said:
> On Fri, May 03, 2019 at 12:55:54PM -0400, Garrett Wollman wrote:
>> Does anyone have an easy patch to keep mce(4) from trying to use 9k
>> jumbo mbuf clusters? I think I went down this road once before but
>> the fix wasn't as obvious as it is for the Int
Does anyone have an easy patch to keep mce(4) from trying to use 9k
jumbo mbuf clusters? I think I went down this road once before but
the fix wasn't as obvious as it is for the Intel drivers. (I assume
the hardware is not so broken that it requires packets to be stored in
contiguous physical mem
In article <20180729011153.gd2...@funkthat.com> j...@funkthat.com
writes:
>And I know you know the problem is that over time memory is fragmented,
>so if suddenly you need more jumbo frames than you already have, you're
>SOL...
This problem instantly disappears if you preallocate several gigabyte
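For anyone wanting to try the preallocation route, the pools can be sized at boot; a sketch for /boot/loader.conf, assuming the stock cluster-pool tunables (names per tuning(7) — verify against your release, and treat the numbers as placeholders, not recommendations):

```
# /boot/loader.conf -- reserve cluster pools up front so later
# allocations never have to find contiguous pages in a fragmented map.
kern.ipc.nmbclusters="1048576"   # 2k clusters
kern.ipc.nmbjumbop="524288"      # page-size (4k) jumbo clusters
kern.ipc.nmbjumbo9="262144"      # 9k jumbo clusters (~2.3 GB if fully used)
kern.ipc.nmbjumbo16="32768"      # 16k jumbo clusters
```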
In article r...@ixsystems.com writes:
>I have seen some work in the direction of avoiding larger than page size
>jumbo clusters in 12-CURRENT. Many existing drivers avoid the 9k cluster
>size already. The code for larger cluster sizes in iflib is #ifdef'd out
>so it maxes out at the page size j
I'm commissioning a new NFS server with an Intel dual-40G XL710
interface, running 11.1. I have a few other servers with this
adapter, although not running 40G, and they work fine so long as you
disable TSO. This one ... not so much. On the receive side, it gets
about 600 Mbit/s with lots of ret
< said:
> Pretty sure these problems have been addressed by now, given the amount
> of computers, smart phones, tablets, etc. running with privacy
> extensions enabled.
They've been "fixed" mostly by hiding big networks behind NATs and
leaving them IPv4-only. And in some enterprises by implement
In article <1497408664.2220.3.ca...@me.com>, rpa...@me.com writes:
>I don't see any reason why we shouldn't have privacy addresses enabled
>by default. In fact, back in 2008 no one voiced their concerns.
Back in 2008 most people hadn't had their networks fall over as a
result of MLD listener rep
In article you write:
>Eg, I don't see why we need another tool for some of this missing
>"ethtool" functionality; it seems like most of it would naturally fit
>into ifconfig.
From the end-user perspective, I agree with Drew. Most of this stuff
should just be part of ifconfig.
>As to other fea
In article you write:
># ifconfig -m cxgbe0
>cxgbe0: flags=8943
># ifconfig cxgbe0 mtu 9000
>ifconfig: ioctl SIOCSIFMTU (set mtu): Invalid argument
I believe this device, like many others, does not allow the MTU (or
actually the MRU) to be changed once the receive ring has been set up
You may
I noticed that a large number -- but by no means all -- of the packets
captured using libpcap on a netmap'ified ixl(4) interface show up as
truncated -- usually by exactly four bytes. They show up in tcpdump
like this:
18:10:05.348735 IP truncated-ip - 4 bytes missing! 128.30.xxx.xxx.443 >
yyy.y
< said:
> i think it was committed to HEAD but never integrated in the
> stable/10.x branch. I wrote the code in jan/feb 2015.
> I think you can simply backport the driver from head.
So it turned out that this was merged -- along with an Intel driver
update that I needed anyway -- to stable/10 i
I see from various searches that netmap support was added to ixl(4) --
*but* the code isn't there in 10.2. I'd like to be able to use it for
packet capture, because regular BPF on this interface (XL710) isn't
even able to keep up with 2 Gbit/s, never mind 20 Gbit/s. Can anyone
explain what happen
< said:
>> 2) Stopping jails with virtual network stacks generates warnings from
>> UMA about memory being leaked.
> I'm given to understand that's Known, and presumably Not Quite Trivial
> To Fix. Since I'm not starting/stopping jails repeatedly as a normal
> runtime thing, I'm ignoring it. If
The consensus when I asked seemed to be that VIMAGE+jail was the right
combination to give every container its own private loopback
interface, so I tried to build that. I noticed a few things:
1) The kernel prints out a warning message at boot time that VIMAGE is
"highly experimental". Should I
I'm a bit new to managing jails, and one of the things I'm finding I
need is a way for jails to have their own private loopback interfaces
-- so that things like sendmail and local DNS resolvers actually work
right without explicit configuration. Is there any way of making this
work short of going
< said:
> I'm not really (or really not) comfortable with hacking and recompiling
> stuff. I'd rather not change anything in the kernel. So would it help in
> my case to lower my MTU from 9000 to 4000? If I understand correctly,
> this would need to allocate chunks of 4k, which is far more logi
< said:
> - as you said, like ~ 64k), and allocate that way. That way there's no
> fragmentation to worry about - everything's just using a custom slab
> allocator for these large allocation sizes.
> It's kind of tempting to suggest freebsd support such a thing, as I
> can see increasing requirem
< said:
> There have been email list threads discussing how allocating 9K jumbo
> mbufs will fragment the KVM (kernel virtual memory) used for mbuf
> cluster allocation and cause grief.
The problem is not KVA fragmentation -- the clusters come from a
separate map which should prevent that -- it'
< said:
> I think your other suggestions are fine, however the problem is that:
> 1) they seem complex for an edge case
> 2) turning them on may tank performance for no good reason if the
> heuristic is met but we're not in the bad situation
I'm OK with trading off performance for one user agai
Here's the scenario:
1) A small number of (Linux) clients run a large number of processes
(compute jobs) that read large files sequentially out of an NFS
filesystem. Each process is reading from a different file.
2) The clients are behind a network bottleneck.
3) The Linux NFS client will issue
In article
<388835013.10159778.1424820357923.javamail.r...@uoguelph.ca>,
rmack...@uoguelph.ca writes:
>I tend to think that a bias towards doing Getattr/Lookup over Read/Write
>may help performance (the old "shortest job first" principal), I'm not
>sure you'll have a big enough queue of outstandin
So is anyone working on an RFC 7217 ("Stable and Opaque IIDs with
SLAAC") implementation for FreeBSD yet?
-GAWollman
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd
In article <201407151034.54681@freebsd.org>, j...@freebsd.org writes:
>Hmm, I am surprised by the m_pullup() behavior that it doesn't just
>notice that the first mbuf with a cluster has the desired data already
>and returns without doing anything.
The specification of m_pullup() is that it re
In article, csforge...@gmail.com writes:
>50/27433/0 requests for jumbo clusters denied (4k/9k/16k)
This is going to screw you. You need to make sure that no NIC driver
ever allocates 9k jumbo pages -- unless you are using one of those
mythical drivers that can't do scatter/gather DMA on receiv
I recently put a new server running 9.2 (with a local patches for NFS)
into production, and it's immediately started to fail in an odd way.
Since I pounded this server pretty heavily and never saw the error in
testing, I'm more than a little bit taken aback. We have identical
hardware in productio
In article ,
Peter Wemm quotes some advice about ZFS filesystem vdev layout:
>"1. Virtual Devices Determine IOPS
>IOPS (I/O per second) are mostly a factor of the number of virtual
>devices (vdevs) in a zpool. They are not a factor of the raw number of
>disks in the zpool. This is probably the sing
< said:
> The patch includes a lot of drc2.patch and drc3.patch, so don't try
> and apply it to a patched kernel. Hopefully it will apply cleanly to
> vanilla sources.
> The patch has been minimally tested.
Well, it's taken a long time, but I was finally able to get some
testing. The user whos
< said:
> I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues. However, I'm
still waiting for my user with 500 VMs to have enough free to be able
to run some real stress tests fo
< said:
> Basically, this patch:
> - allows setting of the tcp timeout via vfs.nfsd.tcpcachetimeo
> (I'd suggest you go down to a few minutes instead of 12hrs)
> - allows TCP caching to be disabled by setting vfs.nfsd.cachetcp=0
> - does the above 2 things you describe to try and avoid the live
< said:
> To be honest, I'd consider seeing a lot of non-empty receive queues
> for TCP connections to the NFS server to be an indication that it is
> near/at its load limit. (Sure, if you do netstat a lot, you will occasionally
> see a non-empty queue here or there, but I would not expect to see
In article <513e3d75.7010...@freebsd.org>, an...@freebsd.org writes:
>On 11.03.2013 17:05, Garrett Wollman wrote:
>> Well, I have two problems: one is running out of mbufs (caused, we
>> think, by ixgbe requiring 9k clusters when it doesn't actually need
>> them)
In article, jfvo...@gmail.com writes:
>How large are you configuring your rings Garrett? Maybe if you tried
>reducing them?
I'm not configuring them at all. (Well, hmmm, I did limit the number
of queues to 6 (per interface, it appears, so that's 12 in all).)
There's a limit to how much experim
In article <513db550.5010...@freebsd.org>, an...@freebsd.org writes:
>Garrett's problem is receive side specific and NFS can't do much about it.
>Unless, of course, NFS is holding on to received mbufs for a longer time.
Well, I have two problems: one is running out of mbufs (caused, we
think, by
< said:
> Yes, in the past the code was in this form, it should work fine Garrett,
> just make sure
> the 4K pool is large enough.
[Andre Oppermann's patch:]
>> if (adapter->max_frame_size <= 2048)
>> adapter->rx_mbuf_sz = MCLBYTES;
>> - else if (adapter->max_frame_size <= 4096)
>> + el
In article <20795.29370.194678.963...@hergotha.csail.mit.edu>, I wrote:
>< said:
>> I've thought about this. My concern is that the separate thread might
>> not keep up with the trimming demand. If that occurred, the cache would
>> grow veryyy laarrggge, with effects like running out of mbuf cluste
< said:
> around the highwater mark basically indicates this is working. If it wasn't
> throwing away replies where the receipt has been ack'd at the TCP
> level, the cache would grow very large, since they would only be
> discarded after a loonnngg timeout (12hours unless you've changes
> NFSRVC
< said:
> I suspect this indicates that it isn't mutex contention, since the
> threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each thread spins for a while
before blocking. With the current implementation, all of them end up
doing this
< said:
> The cached replies are copies of the mbuf list done via m_copym().
> As such, the clusters in these replies won't be free'd (ref cnt -> 0)
> until the cache is trimmed (nfsrv_trimcache() gets called after the
> TCP layer has received an ACK for receipt of the reply from the client).
I
< said:
> If reducing the size to 4K doesn't fix the problem, you might want to
> consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
> the increased CPU overhead (and some increased mutex contention) of
> calling nfsrv_trimcache() more frequently.
Can't do that -- the system beco
< said:
> Yes, in the past the code was in this form, it should work fine Garrett,
> just make sure
> the 4K pool is large enough.
I take it then that the hardware works in the traditional way, and
just keeps on using buffers until the packet is completely written,
then sets a field on the ring d
< said:
> [stuff I wrote deleted]
> You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay "cache".
> Jumbo pages come directly from the kernel_map which on amd64 is 512GB.
> So KVA shouldn't be a problem. Your pr
< said:
> I am not strongly opposed to trying the 4k mbuf pool for all larger sizes,
> Garrett maybe if you would try that on your system and see if that helps
> you, I could envision making this a tunable at some point perhaps?
If you can provide a patch I can certainly build it in to our kernel
I have a machine (actually six of them) with an Intel dual-10G NIC on
the motherboard. Two of them (so far) are connected to a network
using jumbo frames, with an MTU a little under 9k, so the ixgbe driver
allocates 32,000 9k clusters for its receive rings. I have noticed,
on the machine that is
I'm working on (of all things) a Puppet module to configure NFS
servers, and I'm wondering if anyone expects to implement NFS over
SCTP on FreeBSD.
-GAWollman
In article <4c7d02bb.40...@freebsd.org> an...@freebsd.org writes:
>sendto() will not be touched or modified. It's just that on a TCP socket
>the tcp protocol will not perform an implied connect anymore. The only thing
>that changes is TCP dropping a deprecated and experimental extension and
>beh
In article <4a3bf2df.6080...@freebsd.org>, Andre writes:
>2) in old T/TCP (RFC1644) which we supported in our TCP code the SYN/FIN
>combination was a valid one, though not directly intended for SYN/ACK/FIN.
It still is valid, and should be possible to generate using sendmsg()
and MSG_EOF. No
In article ,
Robert Watson writes:
>m_pullup() has to do with mbuf chain memory contiguity during packet
>processing.
Historically, m_pullup() also had one other extremely important
function: to make sure that the header data you were about to modify
was not stored in a (possibly shared) cluster
In article <41d96b7f-f76d-4f35-ba1d-0edf810e6...@young-alumni.com>,
"Chris" writes:
>True OR False
>
>1) NDIS only works with XP drivers.
Can't answer that as I've never needed to try a Vista driver.
>2) NDIS only works with 32-bit drivers and wont work on amd64.
False, unless someone has broke
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] writes:
>static int
>mpls_attach(struct socket *so)
The prototype for a protocol attach functions is
int (*pru_attach)(struct socket *so, int proto, struct thread *td);
(see sys/protosw.h). You don't have to use these arguments, but
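For illustration, a minimal attach routine matching that prototype might look like the sketch below. This is kernel-context code following 4.4BSD/FreeBSD protosw conventions, not something runnable standalone; `mplspcb_alloc()` is a hypothetical per-protocol PCB allocator, and the buffer sizes are placeholders:

```c
/* Kernel-context sketch, not a standalone program. */
static int
mpls_attach(struct socket *so, int proto, struct thread *td)
{
	int error;

	/* Reserve send/receive socket buffer space before the PCB. */
	error = soreserve(so, 8192, 8192);
	if (error)
		return (error);

	/* mplspcb_alloc() is a hypothetical helper for this example. */
	so->so_pcb = mplspcb_alloc(M_NOWAIT);
	if (so->so_pcb == NULL)
		return (ENOBUFS);
	return (0);
}
```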
< said:
> Garrett Wollman wrote:
>> Am I the only one who would be happier if openssh were not in the base
>> system at all?
> Quite possibly :)
> I don't think it's at all viable to ship FreeBSD without an ssh client
> in this day and age.
If that were w
In article <[EMAIL PROTECTED]>, Brooks
Davis writes:
>On Thu, Jun 12, 2008 at 06:30:05PM -0700, Peter Losher wrote:
>> FYI - HPN is already a build option in the openssh-portable port.
>
>I do think we should strongly consider adding the rest of it to the base.
Am I the only one who would be happ
In article <[EMAIL PROTECTED]>,
Jeff Davis <[EMAIL PROTECTED]> wrote:
>You should see something like "write failed: host is down" and the
>session will terminate. Of course, when ssh exits, the TCP connection
>closes. The only way to see that it's still open and active is by
>writing (or using) a
< said:
> Probably the problem is largest for latency, especially in benchmarks.
> Latency benchmarks probably have to start cold, so they have no chance
> of queue lengths > 1, so there must be a context switch per packet and
> may be 2.
It has frequently been proposed that one of the deficienc
< said:
> Right now, at least, it seems to work OK. I haven't tried witness,
> but a non-debug kernel shows a big speedup from enabling it. Do
> you think there is a chance that it could be made to work in FreeBSD?
I did this ten years ago for a previous job and was able to blow out
the stack
< said:
> "Li, Qing" wrote:
>> Ran the packet tests against FreeBSD 5.3 and 6-CURRENT and both
>> respond to the SYN+FIN packets with SYN+ACK.
> This is expected behaviour because of FreeBSD used to implement T/TCP
> according to RFC1644.
Actually, it is expected behavior because FreeBSD used to
< said:
> Signal numbers are typically represented as ints. Is there anything in
> the kernel that prevents me from, say, calling kill(2) with a second
> argument of, say, 0xdeadbeef, in other words any old random int value
> that I might care to use?
Yes. Signals are represented, in the kernel
< said:
> I'm not so happy with a FreeBSD-only "proprietary" thing. Is there any
> proposed RFC work that provides the qualities you want? The advantage
> with T/TCP is that there was a published standard.
T/TCP was a published *non*standard, loudly blazoned "EXPERIMENTAL".
I don't see how Andr
< said:
> I think that it would have to be slightly more complex than that for it to
> be secure. Instead of using syncookie/RFC1948-like generation,
> [...]
HIP! HIP! HIP!!!
-GAWollman
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/m
< said:
> That's it for now... just aio_connect() and aio_accept(). If I think of
> something else, I'll let you know.
[lots of Big Picture(R) stuff elided]
This is certainly an interesting model of program design. However,
some caution is advised. Here are the most significant issues:
- Fre
< said:
> I'm sitting here looking at that man pages for aio_read and aio_write,
> and the question occurs to me: ``Home come there is no such thing as
> an aio_connect function?''
Mostly because there is no need, since connect() doesn't transfer any
data; it just establishes a connection. If t
< said:
> Yes, something in that direction, plus: protocols:
> IPv4, IPv6, TCP, UDP, ICMP, IPX, etc.
> Just about everything as modules.
It is not generally regarded as a good idea to make artificial
boundaries between (e.g.) IP and TCP.
-GAWollman
< said:
> Brooks Davis wrote:
>> I'm considering adding an ifconfig -v option that would imply -m and add
>> more details like index, epoch, dname, dunit, etc.
> That would be great!
A particularly relevant feature would give `ifconfig' an option to
emit the current configuration of the interfac
< said:
> 1. Did delay ack time still be detected each 200ms? Which function do
> this job? If not, can anybody help to describe some detail things about
> delay ack time at freebsd source code.
The TCP timer code has been completely rewritten. You can see how it
works now by grepping for `call
< said:
> What is the difference between Layer2 and Layer3, and what does that
> affect?
"Layer 2 switch" is a fancy name for a bridge.
"Layer 3 switch" is a fancy name for a router.
-GAWollman
< said:
> I believe that sme of the patches were considerred "experimental and
> just lacked someone to make them production quality. In other cases they
> were not against 'current' and porting them to -curren twas left as "an
> exercise for the reader". No-one who had that ime had a need for th
< said:
> routed we support largely out of nostalgia, I guess.
Modern routed does more than just RIP; it's responsible for all sorts
of routing-table management tasks that we mostly just pretend don't
exist (e.g., responding to RTM_LOSING messages).
-GAWollman
< said:
> - there seems to be no boundary on how many segments we keep in the
>tcp reassembly queue
I'm not aware of any TCP implementation which ever had such a
limitation. Perhaps all the others implemented something like that in
the past few years and we haven't kept up? (I've certainly
< said:
> Can different MTUs be mixed on the same wire?
No.
-GAWollman
< said:
> I'm considering ways to make sendmsg(2)/recvmsg(2) DTRT, and my
> current candidate is give them a flag bit which says "msg_name has
> both addresses".
Um, they already do the right thing. That's what the IP_RECVDESTADDR
option (and its dual whose name I forget right now) is all about.
< said:
> 1. Do you think it is neccessary to do a htons() on the randomized
> ip_id too? I'd say yes if there is a case where it has to
> monotonically increase afterwards. Does it?
IP IDs are nonces. The only requirement is that they not be reused
for a packet to the same destinatio
< said:
> As long as the chipsets are compliant, an 8 wire straight thru cable
> works as both a straight and a crossover. The GigE standard requires
> this behaviour.
"Crossover" isn't meaningful in the case of GigE: both stations
transmit and receive simultaneously on all four paris.
-GAWollm
< said:
> If I were to tweak the sysctl net.inet.ip.intr_queue_maxlen from its
> default of 50 up, would that possibly help named?
No, it will not have any effect on your problem. The IP input queue
is only on receive, and your problem is on transmit.
The only thing that could possibly help you
< said:
> a lot like it. Right now, it uses the NET_RT_IFLIST sysctl to retrieve
> the interface list; the kernel appends RTM_NEWADDR messages to the
> buffer contents returned by the sysctl to report each address family.
> The function sysctl_iflist() in net/rtsock.c is responsible for this.
The
< said:
> Are there any plans to incorporate SACK in FreeBSD?
We plan to add SACK to FreeBSD whan a compatible implementation is
available.
-GAWollman
< said:
> The internals of struct device are not contained in
Unfortunately, the internals of `device_t' are. That's why style(9)
discourages such types.
-GAWollman
< said:
> There are a number of situations in which the mbuf allocator is used to
> allocate non-mbufs -- for example, we use mbufs to hold IP fragment
> queues, as well as some static packet prototype mbufs, socket options,
> etc.
You're a few years out of date on that one. Socket options shoul
< said:
> I agree, then... Isn't it already the purpose of RTF_CLONING ?
> When should RTF_PRCLONIG be set ?
RTF_PRCLONING is set automatically by the protocol to cause host
routes to be generated on every unique lookup.
RTF_CLONING is set when the route is added (either manually, or
automatical
< said:
> I don't think you ran out of mbufs (you would have noticed) so that
> rules out case #1. Checking cases #2 and #3 requires adding a little
> instrumentation to the driver. If the XL_RXSTAT_UP_ERROR bit is being
> detected in xl_rxeof(), you can print out the status word and see
> if any of the fo
< said:
> What is the BSD equivalent of this Linux call:
> sock=socket(AF_INET,SOCK_PACKET,htons(ETH_P_RARP));
man libpcap
-GAWollman
< said:
> How do I find out before I go and buy a usb modem that its going to be
> detected as a umodem or a ugen device.
A priori, you can't. Looking in the Macintosh section will usually
assure you of getting something that is not Windows-specific, although
this is not a sufficient condition.
< said:
> As I understand if the data in the send buffer is bigger than MSS it means
> that TCP stack has some reason not to send it and this reason is not
> TF_NOPUSH flag. Am I wrong ?
If TCP is for some reason prohibited from sending (i.e., the flow
control or congestion control is closed), t
< said:
> Where can I get a list of USB modems supported by BSD
You can't. FreeBSD supports any USB modem that (1) claims in the USB
control protocol to be a modem and (2) doesn't require a firmware
download to make it work. It does not look for specific product
identifiers.
-GAWollman
< said:
> always calls tcp_output() when TCP_NOPUSH is turned off. I think
> tcp_output() should be called only if data in the send buffer is less
> than MSS:
I believe that this is intentional. The application had to explicitly
enable TCP_NOPUSH, so if the application disables it explicitly, t
< said:
>> Actually, a proper BSD port would use the net.route.iflist sysctl
>> instead.
> $ uname -sr
> FreeBSD 4.6-RC
> $ sysctl net.route
> sysctl: unknown oid 'net.route'
Irrelevant. sysctl(8) is not equipped to handle the contents of this
MIB branch.
> I think since the ports work agains
< said:
> A proper BSD port could use something like the trick in Stevens[1] and
> keep retrying the call with a larger bufer until the length of the
> result is the same as in the previous call.
Actually, a proper BSD port would use the net.route.iflist sysctl
instead.
-GAWollman
< said:
> I am interested in trying to map IP TOS/Diffserv values
> to 802.1p priorities. Some of the switch vendors claim to be
> able to do this.
The priority tag is encoded in the same bitfield as the VLAN tag in
the encapsulation header.
-GAWollman
< said:
> Wrong.
BZZZT!
> As I stated originally, it's impossible to use 'maxsockbuf' value.
That does not change the fact that an unprivileged user can use up to
`maxsockbuf' bytes of wired kernel memory per socket. That's why the
limit exists. The amount of memory allocated to socket buffer
< said:
> Anyone knows how inetd [internal] services will behave under stress
> situation ?
Very poorly, since they were never intended to be used in that
manner. A purpose-built server will almost invariably handle loads
much better.
-GAWollman
< said:
> Seriously, you didn't give any alternative. How does one
> knows the maximum allowed limit? By just blindly trying?
Ask for however much you think you actually need, and bleat to the
administrator (or limp along) if you don't get it. Keep in mind that
this is a security-sensitive par
< said:
> Working with Sun JDK network code I have realized a need to provide some
> range checking wrapper for setsockopt() in SO_{SND,RCV}BUF cases. Short
> walk over documentation shown that maximum buffer size is exported via
> kern.ipc.maxsockbuf sysctl. But attempt to use this value as max
< said:
> What is involved?
A huge amount of work: converting the ancient netiso code to use
modern kernel programming interfaces, figuring out MP/MT locking,
adding the netiso support back to the protocol-independent parts of
the kernel, fixing all the warnings, translating all of the anti-DoS
c
< said:
> On the other hand the NetBSD folks don't see it as dead weight
Are you volunteering to do all the work (or pay someone else to do
so)?
-GAWollman
< said:
> Has anyone done work to incorporate the ISO networking code
> into FreeBSD? This has been done for NetBSD. It is a required
> component if one wishes to natively support ISO based protocols
> such as IS-IS.
For the limited value that OSI protocols have today, it is a much
better use of
< said:
> Why is rt_refnt decreased so early and not later ?
So long as the route is marked RTF_UP, it cannot be deleted. In a
single-threaded kernel, it is not possible for this code to be
preempted, so there is no means by which the route flags could be
changed. (RTF_UP is unset when and only
< said:
> root@heat[~]% sysctl -a | grep ipf | grep bridge
> net.link.ether.bridge_ipfw: 0
> net.link.ether.bridge_ipf: 0
Grrr... Who's responsible for creating non-protocol nodes under
net.link.ether?
-GAWollman
< said:
> My question is do I realy need to fill this? Or is it there just for
> future use?
That depends on what you will be using the length for. Some
interfaces require that it be present; other interfaces (e.g., those
system calls which already take a separate length argument) do not.
-GAWollman
< said:
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
-GAWollman
< said:
> In the meantime absolutely no code has picked up on this idea,
It was copied in spirit from OSF/1.
> The side effect of having some source-files using the _IP_VHL hack and
> some not is that sizeof(struct ip) varies from file to file,
Not so. Any compiler which allocates different a
< said:
> anyone know of an in-kernel traffic generator similar to UDPgen
>
>(http://www.fokus.gmd.de/research/cc/glone/employees/sebastian.zander/private/udpgen/)
>
> for Linux? Userland traffic generators have high overheads with small
> packets at Gigabit speeds.
I wrote one a long time a
< said:
> Accepting incoming T/TCP creates a pretty serious DoS vulnerability,
> doesn't it? The very first packet contains the request, which the
> server must act upon and reply to without further delay. There is no
> 3-way handshake, so a simple attack using spoofed source addresses can
> im