Re: Outgoing packets being sent via wrong interface
On 27/11/2015 5:13 PM, Daniel Bilik wrote:
> On Wed, 25 Nov 2015 12:20:33 + Gary Palmer wrote:
>> route -n get
>
> As suggested by Kevin and Ryan, I set the router to drop redirects...
>
>   net.inet.icmp.drop_redirect: 1
>
> ... but it happened again today, and again the affected host was
> 192.168.2.33. Routing and arp entries were correct. Output of
> "route -n get"...
>
>    route to: 192.168.2.33
> destination: 192.168.2.0
>        mask: 255.255.255.0
>         fib: 0
>   interface: re1
>       flags:
>  recvpipe  sendpipe  ssthresh  rtt,msec    mtu  weight  expire
>         0         0         0         0   1500       1       0
>
> ... has not changed during the problem. Interesting was the ping result...
>
> PING 192.168.2.33 (192.168.2.33): 56 data bytes
> ping: sendto: Operation not permitted
> ping: sendto: Operation not permitted
> ...
> 64 bytes from 192.168.2.33: icmp_seq=11 ttl=128 time=0.593 ms
> ping: sendto: Operation not permitted
> ...
> 64 bytes from 192.168.2.33: icmp_seq=20 ttl=128 time=0.275 ms
> 64 bytes from 192.168.2.33: icmp_seq=21 ttl=128 time=0.251 ms
> ping: sendto: Operation not permitted
> ...
> 64 bytes from 192.168.2.33: icmp_seq=40 ttl=128 time=0.245 ms
> ping: sendto: Operation not permitted
> 64 bytes from 192.168.2.33: icmp_seq=42 ttl=128 time=7.111 ms
> ping: sendto: Operation not permitted
> ...
> --- 192.168.2.33 ping statistics ---
> 46 packets transmitted, 5 packets received, 89.1% packet loss
>
> It seems _some_ packets go out the right interface (re1), but most try to
> go out the wrong one (re0) and are dropped by pf...
>
> 00:00:01.066886 rule 53..16777216/0(match): block out on re0: 82.x.y.50 > 192.168.2.33: ICMP echo request, id 58628, seq 39, length 64
> 00:00:02.017874 rule 53..16777216/0(match): block out on re0: 82.x.y.50 > 192.168.2.33: ICMP echo request, id 58628, seq 41, length 64
> 00:00:02.069634 rule 53..16777216/0(match): block out on re0: 82.x.y.50 > 192.168.2.33: ICMP echo request, id 58628, seq 43, length 64
>
> And again, refreshing the default route (delete default / add default)
> resolved it...
> PING 192.168.2.33 (192.168.2.33): 56 data bytes
> 64 bytes from 192.168.2.33: icmp_seq=0 ttl=128 time=0.496 ms
> 64 bytes from 192.168.2.33: icmp_seq=1 ttl=128 time=0.226 ms
> 64 bytes from 192.168.2.33: icmp_seq=2 ttl=128 time=0.242 ms
> 64 bytes from 192.168.2.33: icmp_seq=3 ttl=128 time=0.226 ms

next time it happens try flushing the arp table.

-- Dan

___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
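The two workarounds mentioned in the thread (refreshing the default route, and Dan's suggestion of flushing the ARP table) can be sketched as a small script. The gateway value `GW` below is a placeholder, not taken from the thread, and the commands need root on the affected router:

```shell
#!/bin/sh
# Workaround sketch for the wrong-interface symptom. GW is hypothetical --
# substitute the router's real default gateway.
GW=82.x.y.1

# Show what the routing table currently says for the affected host.
route -n get 192.168.2.33

# Refresh the default route (the step that resolved the problem above).
route delete default
route add default "$GW"

# Dan's suggestion: flush all ARP entries so they are re-learned.
arp -da
```

This does not fix the underlying cause, of course; it only restores correct forwarding until the next occurrence.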
Re: Kernel NAT issues
On 27/11/2015 12:55 PM, Nathan Aherne wrote:
> Hi Julian,
>
> Thank you for replying. I was completely off grid for a while and only got
> back on it today.
>
> I thought that Vimage was probably the way to achieve what I want. The
> main reason I was staying away from Vimage was the reported bugs with it;
> another reason was the extra overhead. I would like to be able to shut
> down jails quite regularly, so I was worried the kernel panic bug or
> memory leak bug might be a problem here. Is there any version of
> Vimage/FreeBSD which is stable?

Generally vimage is stable. It has had problems with pf over the years
because pf is imported from OpenBSD and has some pretty vimage-unfriendly
assumptions in its design, but I hear that even some of those have been
ironed out. I know of vimage being used to run production virtual systems
in some of the largest banks in the world, processing amounts of
transactions that would make your head spin, so have a small play with it.

Vimage overhead is negative in some situations, i.e. things work faster.
This is especially true when non-vimage workloads contest a single lock
heavily, but vimage splits it over many locks, one for each VM.

Run up a virtualbox or amazon or whatever freebsd instance and play around
with it. Once you realize how insanely powerful it is, you will wonder how
you ever did jails without it. You can use bridges, epairs or netgraph to
do your networking... your choice.

> Regards,
> Nathan
>
>> On 23 Nov 2015, at 5:02 pm, Julian Elischer wrote:
>> On 21/11/2015 10:06 AM, Nathan Aherne wrote:
>>> I had a bit of a think about how to describe what I am trying to
>>> achieve. I am treating each jail like its own little "virtual machine".
>>> The jail provides certain services, using things like nginx or nodejs,
>>> php-fpm, mysql or postgresql. The jails can control connections to
>>> themselves by configuring the firewall ports that are opened on their
>>> IP (10.0.0.0/16 or a public IP).
>>> I know the jails have no firewall of their own; the firewall is
>>> configured from the host. I want each jail or "virtual machine" to be
>>> able to communicate with one another and the wider internet. When a
>>> jail does a DNS query for another App jail, it may get a public IP on
>>> its own Host (or it may get another host), and it has no issues being
>>> able to communicate with another jail on the same host. At the moment
>>> all of the above is working perfectly except for jail to jail
>>> communication on the same host (when the communication is not directly
>>> between 10.0.0.0/16 IP addresses).

This is pretty much exactly where vimage/vnet jails could be used to great
effect. Is there a reason you are not doing that? Each jail has its own
routing tables, addresses and (virtual) interfaces. Here's how I'd do it
with vimage:

             +----------+
             | servers  |
             +----+-----+
                  |
  +--------+ +----+-----+
  | iface  +-+  bridge  |
  +--------+ +----+-----+
                  |
        +---------+---------+
        |  NAT jail router  |
        +--+----+-----+---+-+
           |    |     |   |
        +--+-++-+--++-+-++-+--+
        |    ||    ||   ||    |   jails
        +----++----++---++----+

however the hairpin idea might still be useful even in that scenario if
they don't know about each other's 'local' addresses. But do NAT'd
machines need to talk to each other by external addresses?

> Nathan
>
>>> On 21 Nov 2015, at 9:12 am, Nathan Aherne wrote:
>>>> I am not exactly sure how to draw the setup so it doesn't confuse the
>>>> situation. The setup is extremely simple (I am not running vimage):
>>>> jails running on the 10.0.0.0/16 (cloned lo1 interface) network or
>>>> with public IPs. The jails with private IPs are the HTTP app jails.
>>>> The Host runs a HTTP Proxy (nginx) and forwards traffic to each HTTP
>>>> App jail based on the URL it receives. The jails with public IPs are
>>>> things like database jails which cannot be proxied by the Host. I can
>>>> happily communicate with any jail from my laptop (externally) but when
>>>> I want one jail to communicate with another jail (fo
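The bridge-plus-epair layout Julian describes can be set up with a handful of commands. This is only a sketch of one possible VNET configuration: the jail name, filesystem path, and addresses (`appjail`, `/jails/appjail`, `10.0.0.10`, router at `10.0.0.1`) are invented for illustration, and the NAT/router jail itself is not shown:

```shell
#!/bin/sh
# Sketch: one VNET jail attached to a host bridge via an epair pair,
# roughly matching the bridge + jails diagram above. Needs root on FreeBSD.

ifconfig bridge0 create up
ifconfig epair0 create           # creates epair0a (host) and epair0b (jail)
ifconfig bridge0 addm epair0a
ifconfig epair0a up

# Start a jail with its own virtual network stack and hand it epair0b.
jail -c name=appjail path=/jails/appjail host.hostname=appjail \
     vnet persist vnet.interface=epair0b

# Inside the jail: address on the private network, default route pointing
# at the NAT/router jail (10.0.0.1 is an assumed address).
jexec appjail ifconfig epair0b inet 10.0.0.10/16 up
jexec appjail route add default 10.0.0.1
```

Because each VNET jail gets its own routing table and interfaces, jail-to-jail traffic simply crosses the bridge like traffic between two separate machines, which sidesteps the same-host hairpin problem.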
kernel panic in igb driver
During some performance tests, and while a voice call was going through an
igb interface, we attempted to disconnect and reconnect the cable. After
the interface came up, this kernel panic occurred:

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0xc
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0x80e189b9
stack pointer           = 0x28:0xff80ba3fe640
frame pointer           = 0x28:0xff80ba3feb20
code segment            = base 0x0, limit 0xf, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 12 (irq268: +)
[ thread pid 12 tid 100114 ]
Stopped at      igb_start_locked+0x639: movzbl 0xc(%rbx),%esi

Thanks in advance.
kernel panic in igb driver - more info
Any help would be appreciated.

db> trace
Tracing pid 12 tid 100125 td 0xfe0004ecf000
igb_start_locked() at igb_start_locked+0x639/frame 0xff80e3464b20
igb_msix_que() at igb_msix_que+0xb7/frame 0xff80e3464b60
intr_event_execute_handlers() at intr_event_execute_handlers+0xfd/frame 0xff80e3464b90
ithread_loop() at ithread_loop+0x9d/frame 0xff80e3464be0
fork_exit() at fork_exit+0x11f/frame 0xff80e3464c30
fork_trampoline() at fork_trampoline+0xe/frame 0xff80e3464c30
--- trap 0, rip = 0, rsp = 0xff80e3464cf0, rbp = 0 ---
db>

Thanks in advance
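To turn the offset `igb_start_locked+0x639` into a source line, one approach is to load the crash dump into kgdb against a kernel built with debug symbols. The paths below are examples and will vary by installation:

```shell
# Open the debug kernel and the saved core dump in kgdb (paths are examples).
kgdb /usr/lib/debug/boot/kernel/kernel.debug /var/crash/vmcore.0

# At the (kgdb) prompt, resolve the faulting instruction to a source line
# and confirm the backtrace:
#   (kgdb) list *igb_start_locked+0x639
#   (kgdb) bt
```

Posting the resulting source line along with the FreeBSD version and igb driver revision would make the report much easier to act on.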