Hello,


   I am trying to build a server that can resist SYN floods. I bought an
   IBM x3650 with a 24-core Xeon CPU, 64 GB of RAM and an SSD disk, and I
   plugged an Intel 82598-based Intel® 10 Gigabit XF SR Server Adapter
   into it.

   I installed FreeBSD 9.0-RELEASE, found the ixgbe-2.4.4.tar.gz driver
   and installed it on FreeBSD.


   I checked Intel's documentation and support pages, but I did not find
   any performance-tuning document for FreeBSD.

   There is one like this:
   http://www.intel.com/content/www/us/en/ethernet-controllers/82575-82576-82598-82599-ethernet-controllers-latency-appl-note.html
   (this one is good for Linux), but I do not see any FreeBSD
   documentation.

   I need to improve the performance further. Can you help me with this
   case? I know this network card can handle more pps: with this
   configuration my best result so far is about 600,000 pps of 46-byte
   SYN packets to an open port without input errors, but I want to build
   the box to resist a 2,000,000 pps SYN flood.
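
   For reference, the per-second packet and input-error counters on the
   interface can be watched with, for example:

   # per-second packets, errors and bytes in/out on ix0
   netstat -w 1 -I ix0

   At minimum frame size, 600,000 pps is only about 0.4 Gbit/s on the
   wire, and even 2,000,000 pps is under 1.5 Gbit/s, so the limit here is
   per-packet processing, not bandwidth.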


   I see only 8 IRQ RX queues handling the traffic, but I have a 24-core
   CPU. How can I assign more CPUs for better performance? (I attached a
   screenshot showing the 8 queues.) How can I assign more than 8 cores?
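
   My guess is that the number of queues is a loader tunable and that the
   per-queue MSI-X interrupts can be pinned to individual cores with
   cpuset(1), something like the sketch below. But I do not know whether
   the 2.4.4 driver honours hw.ixgbe.num_queues, or whether the 82598
   caps it at 8 queues anyway (the IRQ numbers are only examples taken
   from vmstat -i output):

   # /boot/loader.conf: ask the driver for more queues
   # (assumption: the legacy ixgbe driver reads hw.ixgbe.num_queues)
   hw.ixgbe.num_queues=16

   # after boot: list the "irqNNN: ix0:que N" vectors, then pin them
   vmstat -i | grep ix0
   cpuset -l 0 -x 256        # bind ix0:que 0 (irq256) to CPU 0
   cpuset -l 1 -x 257        # bind ix0:que 1 (irq257) to CPU 1, and so on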

   Finally, do I need to tune the ixgbe-2.4.4 driver or change FreeBSD
   kernel parameters for better performance? Does anybody know about
   that?
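
   Here is the dev.ix.0 sysctl output (all counters are zero in this
   capture). The meaning of each knob can be listed with sysctl -d, for
   example:

   # dump the ix0 counters, then show the description of each one
   sysctl dev.ix.0
   sysctl -d dev.ix.0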


   dev.ix.0.dropped: 0
   dev.ix.0.mbuf_defrag_failed: 0
   dev.ix.0.no_tx_dma_setup: 0
   dev.ix.0.watchdog_events: 0
   dev.ix.0.tso_tx: 0
   dev.ix.0.link_irq: 0
   dev.ix.0.queue0.interrupt_rate: 0
   dev.ix.0.queue0.txd_head: 0
   dev.ix.0.queue0.txd_tail: 0
   dev.ix.0.queue0.no_desc_avail: 0
   dev.ix.0.queue0.tx_packets: 0
   dev.ix.0.queue0.rxd_head: 0
   dev.ix.0.queue0.rxd_tail: 0
   dev.ix.0.queue0.rx_packets: 0
   dev.ix.0.queue0.rx_bytes: 0
   dev.ix.0.queue0.lro_queued: 0
   dev.ix.0.queue0.lro_flushed: 0
   dev.ix.0.queue1.interrupt_rate: 0
   dev.ix.0.queue1.txd_head: 0
   dev.ix.0.queue1.txd_tail: 0
   dev.ix.0.queue1.no_desc_avail: 0
   dev.ix.0.queue1.tx_packets: 0
   dev.ix.0.queue1.rxd_head: 0
   dev.ix.0.queue1.rxd_tail: 0
   dev.ix.0.queue1.rx_packets: 0
   dev.ix.0.queue1.rx_bytes: 0
   dev.ix.0.queue1.lro_queued: 0
   dev.ix.0.queue1.lro_flushed: 0
   dev.ix.0.queue2.interrupt_rate: 0
   dev.ix.0.queue2.txd_head: 0
   dev.ix.0.queue2.txd_tail: 0
   dev.ix.0.queue2.no_desc_avail: 0
   dev.ix.0.queue2.tx_packets: 0
   dev.ix.0.queue2.rxd_head: 0
   dev.ix.0.queue2.rxd_tail: 0
   dev.ix.0.queue2.rx_packets: 0
   dev.ix.0.queue2.rx_bytes: 0
   dev.ix.0.queue2.lro_queued: 0
   dev.ix.0.queue2.lro_flushed: 0
   dev.ix.0.queue3.interrupt_rate: 0
   dev.ix.0.queue3.txd_head: 0
   dev.ix.0.queue3.txd_tail: 0
   dev.ix.0.queue3.no_desc_avail: 0
   dev.ix.0.queue3.tx_packets: 0
   dev.ix.0.queue3.rxd_head: 0
   dev.ix.0.queue3.rxd_tail: 0
   dev.ix.0.queue3.rx_packets: 0
   dev.ix.0.queue3.rx_bytes: 0
   dev.ix.0.queue3.lro_queued: 0
   dev.ix.0.queue3.lro_flushed: 0
   dev.ix.0.queue4.interrupt_rate: 0
   dev.ix.0.queue4.txd_head: 0
   dev.ix.0.queue4.txd_tail: 0
   dev.ix.0.queue4.no_desc_avail: 0
   dev.ix.0.queue4.tx_packets: 0
   dev.ix.0.queue4.rxd_head: 0
   dev.ix.0.queue4.rxd_tail: 0
   dev.ix.0.queue4.rx_packets: 0
   dev.ix.0.queue4.rx_bytes: 0
   dev.ix.0.queue4.lro_queued: 0
   dev.ix.0.queue4.lro_flushed: 0
   dev.ix.0.queue5.interrupt_rate: 0
   dev.ix.0.queue5.txd_head: 0
   dev.ix.0.queue5.txd_tail: 0
   dev.ix.0.queue5.no_desc_avail: 0
   dev.ix.0.queue5.tx_packets: 0
   dev.ix.0.queue5.rxd_head: 0
   dev.ix.0.queue5.rxd_tail: 0
   dev.ix.0.queue5.rx_packets: 0
   dev.ix.0.queue5.rx_bytes: 0
   dev.ix.0.queue5.lro_queued: 0
   dev.ix.0.queue5.lro_flushed: 0
   dev.ix.0.queue6.interrupt_rate: 0
   dev.ix.0.queue6.txd_head: 0
   dev.ix.0.queue6.txd_tail: 0
   dev.ix.0.queue6.no_desc_avail: 0
   dev.ix.0.queue6.tx_packets: 0
   dev.ix.0.queue6.rxd_head: 0
   dev.ix.0.queue6.rxd_tail: 0
   dev.ix.0.queue6.rx_packets: 0
   dev.ix.0.queue6.rx_bytes: 0
   dev.ix.0.queue6.lro_queued: 0
   dev.ix.0.queue6.lro_flushed: 0
   dev.ix.0.queue7.interrupt_rate: 0
   dev.ix.0.queue7.txd_head: 0
   dev.ix.0.queue7.txd_tail: 0
   dev.ix.0.queue7.no_desc_avail: 0
   dev.ix.0.queue7.tx_packets: 0
   dev.ix.0.queue7.rxd_head: 0
   dev.ix.0.queue7.rxd_tail: 0
   dev.ix.0.queue7.rx_packets: 0
   dev.ix.0.queue7.rx_bytes: 0
   dev.ix.0.queue7.lro_queued: 0
   dev.ix.0.queue7.lro_flushed: 0


   Here is my driver output:

   bsd# dmesg | grep ix

   module_register: module pci/ixgbe already exists!
   Module pci/ixgbe failed to register: 17
   module_register: module pci/ixv already exists!
   Module pci/ixv failed to register: 17
   acpi0: Power Button (fixed)
   ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.4.4> port 0x2000-0x201f mem 0x9ba40000-0x9ba5ffff,0x9ba00000-0x9ba3ffff,0x9ba60000-0x9ba63fff irq 30 at device 0.0 on pci31
   ix0: Using MSIX interrupts with 9 vectors
   ix0: RX Descriptors exceed system mbuf max, using default instead!
   ix0: Ethernet address: 00:1b:21:cb:57:96
   ix0: PCI Express Bus: Speed 2.5Gb/s Width x8
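
   I also do not understand the "RX Descriptors exceed system mbuf max"
   line. My guess is that the requested descriptor count did not fit into
   kern.ipc.nmbclusters at the time the driver attached, so it fell back
   to the default ring size. If that is right, maybe setting these as
   loader tunables is the proper fix (I am guessing at the hw.ixgbe.*
   names for this driver version):

   # /boot/loader.conf (assumed names for the legacy ixgbe driver)
   hw.ixgbe.rxd=4096              # RX descriptors per queue
   hw.ixgbe.txd=4096              # TX descriptors per queue
   kern.ipc.nmbclusters=2560000   # same value as in sysctl.conf below, but as
                                  # a loader tunable it is raised before attach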


   Also, here is my /boot/loader.conf:


   gaze# cat /boot/loader.conf

   # Useful when your software uses select() instead of kevent/kqueue or
   # when you are under DDoS
   # DNS accf available on 8.0+
   accf_data_load="YES"
   accf_http_load="YES"

   # Async IO system calls
   aio_load="YES"

   # Intel 82598: load driver
   ixgbe_load="YES"

   # Load cubic
   cubic_load="YES"

   #hw.bce.rxd=2048
   #hw.igb.rxd=2048
   #hw.bce.tso_enable=0
   #hw.pci.enable_msix=0
   #net.isr.direct_force=1
   #net.isr.direct=1
   #net.isr.maxthreads=12               # Max number of threads for NIC IRQ balancing
   #net.isr.numthread=12

   autoboot_delay="3"                   # reduce boot menu delay from 10 to 3 seconds
   #if_bce_load="YES"                   # load the bce(4) kernel module on boot
   loader_logo="beastie"                # old FreeBSD logo menu

   #net.inet.tcp.syncache.hashsize=1024    # syncache hash size
   #net.inet.tcp.syncache.bucketlimit=100  # syncache bucket limit
   #net.inet.tcp.tcbhashsize=4096          # tcb hash size

   net.isr.bindthreads=0                # do not bind threads to CPUs
   net.isr.direct=1                     # interrupt handling via multiple CPUs
   net.isr.direct_force=1               # "
   net.isr.maxthreads=16                # Max number of threads for NIC IRQ balancing

   hw.pci.enable_msix=1
   hw.ix.rx_process_limit=1000000
   hw.igb.rx_process_limit=1000000
   hw.em.rxd=2048
   hw.igb.rxd=2048
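
   I am also not sure whether the rx_process_limit tunable should be
   spelled hw.ix.* or hw.ixgbe.* for this driver version. Which tunables
   the kernel actually picked up at boot can be checked with kenv, for
   example:

   # show the loader tunables the kernel received at boot
   kenv | grep -E 'hw\.(ix|ixgbe|igb|em)\.'

   # and one value the driver ended up using (flow control, set below)
   sysctl dev.ix.0.fc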


   And my /etc/sysctl.conf:


   # $FreeBSD: release/9.0.0/etc/sysctl.conf 112200 2003-03-13 18:43:50Z mux $
   #
   #  This file is read when going to multi-user and its contents piped
   #  thru ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for
   #  details.
   #

   # Uncomment this to prevent users from seeing information about
   # processes that are being run under another UID.
   #security.bsd.see_other_uids=0

   # INCREASE BUFFER SIZE OF CONNECTIONS
   kern.ipc.nmbclusters=2560000

   # Increase storm control threshold
   hw.intr_storm_threshold=40000

   # Decrease ACK, SYN-ACK, FIN-ACK wait time
   net.inet.tcp.msl=5000

   # DROP TCP CONNECTION IF ON CLOSED PORT
   net.inet.tcp.blackhole=2

   # DROP UDP CONNECTION IF ON CLOSED PORT
   net.inet.udp.blackhole=1

   # ICMP RESPONSE LIMITING: max 50, after 50 do not respond
   net.inet.icmp.icmplim=50

   # BACKLOG QUEUE
   kern.ipc.somaxconn=65535

   # Increase sockets
   kern.ipc.maxsockets=20480000

   # Increase socket buffer
   kern.ipc.maxsockbuf=104857600

   # For lower latency you can decrease the scheduler's maximum time slice
   kern.sched.slice=1

   # Every socket is a file, so increase them
   kern.maxfiles=20480000
   kern.maxfilesperproc=200000000
   kern.maxvnodes=20000000

   # Increase buffers
   net.inet.tcp.recvspace=65536000
   net.inet.tcp.recvbuf_max=1048576000
   net.inet.tcp.recvbuf_inc=6553500
   net.inet.tcp.sendspace=32768000
   net.inet.tcp.sendbuf_max=2097152000
   net.inet.tcp.sendbuf_inc=8192000

   # The timestamp field is also useful when using syncookies
   net.inet.tcp.rfc1323=1

   # If you set this there is no need for the TCP_NODELAY sockopt (see man tcp)
   net.inet.tcp.delayed_ack=0

   # Turn off receive autotuning
   # You can play with it.
   #net.inet.tcp.recvbuf_auto=0
   #net.inet.tcp.sendbuf_auto=0

   # We assume we have very fast clients
   #net.inet.tcp.slowstart_flightsize=100
   #net.inet.tcp.local_slowstart_flightsize=100

   # For outgoing connections only. Good for seed-boxes and ftp servers.
   net.inet.ip.portrange.first=1024
   net.inet.ip.portrange.last=65535

   # Stops route cache degradation during a high-bandwidth flood
   #net.inet.ip.rtexpire=2
   net.inet.ip.rtminexpire=2
   net.inet.ip.rtmaxcache=1024

   # Security
   net.inet.ip.sourceroute=0
   net.inet.ip.accept_sourceroute=0
   net.inet.icmp.maskrepl=0
   net.inet.icmp.log_redirect=0
   net.inet.tcp.drop_synfin=1

   # Security
   net.inet.ip.redirect=0
   net.inet.ip.sourceroute=0
   net.inet.ip.accept_sourceroute=0
   net.inet.icmp.log_redirect=0

   # Max number of timewait sockets (maximum number of compressed TCP
   # TIME_WAIT entries)
   net.inet.tcp.maxtcptw=20000000

   # FIN_WAIT_2 state fast recycle
   net.inet.tcp.fast_finwait2_recycle=1

   # Time before a TCP keepalive probe is sent
   # default is 2 hours (7200000)
   net.inet.tcp.keepidle=60000

   # Should be increased until net.inet.ip.intr_queue_drops is zero
   net.inet.ip.intr_queue_maxlen=4096

   # Enable send/recv autotuning
   net.inet.tcp.sendbuf_auto=1
   net.inet.tcp.recvbuf_auto=1

   # Increase autotuning step size
   net.inet.tcp.sendbuf_inc=16384
   net.inet.tcp.recvbuf_inc=524288

   # Turn off inflight limiting
   #net.inet.tcp.inflight.enable=0

   # Set this on test/measurement hosts
   net.inet.tcp.hostcache.expire=1

   # Randomized process IDs
   kern.randompid=348

   # Do not process any TCP options in the TCP headers
   net.inet.ip.process_options=0

   # Disable path MTU discovery
   net.inet.tcp.path_mtu_discovery=0

   # Disable SACK
   net.inet.tcp.sack.enable=0

   kern.ipc.shmmax=5368709120
   kern.ipc.shmall=13107200
   kern.ipc.semmsl=1024

   net.inet6.ip6.auto_linklocal=0
   net.inet.tcp.syncookies=0

   # BPF: increase buffer size
   net.bpf.maxbufsize=1048576

   kern.ipc.nmbjumbop=262144
   kern.ipc.nmbjumbo16=32000
   kern.ipc.nmbjumbo9=64000

   dev.ix.0.fc=0



   Best Regards,




   Seyit Özgür
   Network Administrator
   Eski Üsküdar Cad. No:10 VIP Center Kat:7 İçerenköy Ataşehir İstanbul
   t: 0216 577 33 11  |  f: 0216 469 52 43  |  www.magnetdigital.com
