We should just make netmap on freebsd .. much much nicer and easier to
develop on. :)
-a
On 22 July 2014 21:09, John-Mark Gurney wrote:
> Luigi Rizzo wrote this message on Mon, May 19, 2014 at 04:28 -0400:
>> On Sun, May 18, 2014 at 7:49 PM, Adrian Chadd wrote:
>>
>> > Is there a netmap list
Luigi Rizzo wrote this message on Mon, May 19, 2014 at 04:28 -0400:
> On Sun, May 18, 2014 at 7:49 PM, Adrian Chadd wrote:
>
> > Is there a netmap list that these questions (regardless of OS) could go to?
>
> no, there isn't one. At the moment there isn't enough traffic
> to suggest that, and
Hi
I am a mac user. When I try to use the xtendsan iSCSI initiator to connect to a
native iSCSI target, I found that the login response PDU does not contain the
TargetPortalGroupTag key-value pair.
xtendsan reports that TargetPortalGroupTag is missing and disconnects.
I tried a workaround for it and it works.
Add s
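For context only (an illustration, not the actual change being described above):
RFC 3720 requires the target to return TargetPortalGroupTag as a key=value pair
in the first Login Response of a normal session, and the key/value text is just
NUL-terminated "key=value" strings in the PDU's data segment. A hypothetical
helper for appending such a pair might look like this; the function name and
buffer layout are assumptions, not taken from any real target implementation:

#include <stdio.h>
#include <stddef.h>

/*
 * Hypothetical helper: append one "key=value" string plus its terminating
 * NUL to the login response data segment and return the new offset.
 * Real code must also bound-check against the data segment size.
 */
static size_t
append_kv(char *data, size_t off, const char *key, const char *value)
{
	int n = sprintf(data + off, "%s=%s", key, value);

	return (off + n + 1);	/* sprintf wrote the NUL at data[off + n] */
}

/* e.g. off = append_kv(data, off, "TargetPortalGroupTag", "1"); */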
At Tue, 22 Jul 2014 12:35:22 -0700,
Loganaden Velvindron wrote:
> > usually subjective, and different people may have different opinions.
> > Personally, I often find "ping6 -w" quite useful for debugging
> > purposes, and I think limiting its use to link-local by default gives
>
> Agreed. Perhap
On Tue, Jul 22, 2014 at 11:25:37AM -0700, wrote:
> At Tue, 22 Jul 2014 10:01:50 -0700,
> Loganaden Velvindron wrote:
>
> > > > Security Considerations
> > > >
> > > >This protocol has the potential of revealing information useful to a
> > > >would-be attacker. An implementation of
Well, it shows how easily one can saturate the link. Just use more ports, and
they will be saturated as well.
The problem, though, is that netmap requires that one implement the
forwarding "logic", I think.
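For reference, here is a minimal sketch of what that forwarding logic might
look like on top of the netmap user API. The interface names are placeholders,
and the copy-based nm_inject() path plus the missing error handling are
simplifications, not a tuned implementation:

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#include <sys/types.h>
#include <sys/ioctl.h>
#include <poll.h>
#include <stdlib.h>

int
main(void)
{
	/* placeholder interface names; real code would take them as arguments */
	struct nm_desc *in  = nm_open("netmap:ix0", NULL, 0, NULL);
	struct nm_desc *out = nm_open("netmap:ix1", NULL, 0, NULL);
	struct nm_pkthdr h;
	struct pollfd pfd;
	u_char *buf;

	if (in == NULL || out == NULL)
		exit(1);

	pfd.fd = in->fd;
	pfd.events = POLLIN;
	for (;;) {
		poll(&pfd, 1, -1);		/* wait for received frames */
		/* the application-supplied "forwarding logic" goes here;
		 * this sketch just copies every frame to the other port */
		while ((buf = nm_nextpkt(in, &h)) != NULL)
			nm_inject(out, buf, h.len);
		ioctl(out->fd, NIOCTXSYNC, NULL);	/* push out the tx ring */
	}
}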
Best regards
Andreas
On Tue, Jul 22, 2014 at 9:23 PM, Carlos Ferreira
wrote:
> I think the re
hi!
You can use 'pmcstat -S CPU_CLK_UNHALTED_CORE -O pmc.out' (then ctrl-C
it after say 5 seconds), which will log the data to pmc.out;
then 'pmcannotate -k /boot/kernel pmc.out /boot/kernel/kernel' to find
out where the most cpu cycles are being spent.
It should give us the location(s) inside th
I think the results presented in the paper are for one port sending
or receiving at 14.88 Mpps. Using several ports at the same time will surely
give much lower results. But then again, if one wants 8, 16, 24 or even
more ports at 10 Gbit/s, one should look at FPGA implementations.
On 22 J
On 07/22/2014 01:41 PM, John-Mark Gurney wrote:
> John Jasen wrote this message on Tue, Jul 22, 2014 at 11:18 -0400:
>> Feedback and/or tips and tricks more than welcome.
> You should look at netmap if you really want high PPS routing...
Originally, I assumed an interface supporting netmap was re
At Tue, 22 Jul 2014 10:01:50 -0700,
Loganaden Velvindron wrote:
> > > Security Considerations
> > >
> > >This protocol has the potential of revealing information useful to a
> > >would-be attacker. An implementation of this protocol MUST have a
> > >default configuration that refuse
Hi!
Well, what's missing is some dtrace/pmc/lockdebugging investigations
into the system to see where it's currently maxing out at.
I wonder if you're seeing contention on the transmit paths as drivers
queue frames from one set of driver threads/queues to another
potentially completely different
John Jasen wrote this message on Tue, Jul 22, 2014 at 11:18 -0400:
> Feedback and/or tips and tricks more than welcome.
You should look at netmap if you really want high PPS routing...
From the netmap paper:
netmap has been implemented in FreeBSD and Linux
for several 1 and 10 Gbit/s network ada
On Tue, Jul 22, 2014 at 09:53:13AM -0700, wrote:
> At Sun, 20 Jul 2014 02:04:10 -0700,
> Loganaden Velvindron wrote:
>
> > Security Considerations
> >
> >This protocol shares the security issues of ICMPv6 that are
> >documented in the "Security Considerations" section of [5].
> >
>
At Sun, 20 Jul 2014 02:04:10 -0700,
Loganaden Velvindron wrote:
> Security Considerations
>
>This protocol shares the security issues of ICMPv6 that are
>documented in the "Security Considerations" section of [5].
>
>This protocol has the potential of revealing information useful to
Feedback and/or tips and tricks more than welcome.
Outstanding questions:
Would increasing the number of processor cores help?
Would a system where both processor QPI ports connect to each other
mitigate QPI bottlenecks?
Are there further performance optimizations I am missing?
Server Descript
Hi!
I'd appreciate a review of this:
http://people.freebsd.org/~adrian/rss/20140722-rss-udp-1.diff
The overview:
* Add a new flag to ip_output() to instruct it to not override the
flowid with the inp cached value. Some forms of UDP transmit will
break this.
* Add new IP socket optio
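As an aside, a sketch of how a UDP application might consume the result,
assuming the socket option ends up looking like the later FreeBSD RSS work
(a flowid readable via getsockopt(2)). The IP_FLOWID name and the option
level here are assumptions about the diff, not taken from it:

#include <sys/types.h>
#include <sys/socket.h>

#include <netinet/in.h>

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	uint32_t flowid = 0;
	socklen_t len = sizeof(flowid);

#ifdef IP_FLOWID	/* assumed option name; guarded so this builds anywhere */
	if (s >= 0 && getsockopt(s, IPPROTO_IP, IP_FLOWID, &flowid, &len) == 0)
		printf("stack-assigned flowid: 0x%08x\n", (unsigned)flowid);
#endif
	return (0);
}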