Robert,

If both the probe ntopng and the collector ntopng write to the same DB,
you'll end up with duplicated flows.

Since your main goal is redundancy, I would go for the following
setup:

- Probe A: local ntopng + zmq to collector + local nprobe (optional) + flow
dump to MySQL_schema_A
- Probe B: local ntopng + zmq to collector + local nprobe (optional) + flow
dump to MySQL_schema_B
- Collector: ntopng + zmq from probes + local nprobe (optional) + flow dump
to MySQL_schema_collector
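In command form, the setup above would look roughly like this. This is
only a minimal sketch: hostnames, ports, schema names and the MySQL
connection strings are placeholders, and the flags follow the same
conventions as in your examples below.

```shell
# Probe A (Probe B is identical, targeting flowdb_b instead).
# All names and ports are placeholders -- adapt to your environment.
nprobe --zmq "tcp://*:5556" -i eth1 -V 10 -n none
ntopng -i tcp://127.0.0.1:5556 -q -F \
  "mysql;ip-sql-cluster-a;flowdb_a;flows;dbuser;dbuserpw"

# Collector: subscribe to both probes over ZMQ, dump to its own schema.
ntopng -i tcp://probeA:5556 -i tcp://probeB:5556 -q -F \
  "mysql;ip-sql-cluster-c;flowdb_collector;flows;dbuser;dbuserpw"
```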

Each MySQL schema should have its own replication configuration / HA.
Ideally, they should belong to independent clusters.
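For the per-schema replication, classic MySQL source/replica
replication is enough. The snippet below is only a rough outline using
5.x-era syntax; hostnames, server IDs and credentials are placeholders.

```shell
# On the source for schema A, enable binary logging (my.cnf):
#   [mysqld]
#   server-id    = 1
#   log-bin      = mysql-bin
#   binlog-do-db = flowdb_a

# On the replica, point it at the source and start replicating:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='flowdb-a-source', \
  MASTER_USER='repl', MASTER_PASSWORD='replpw'; START SLAVE;"
```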

So: if a probe fails, the collector keeps on receiving flows from the
surviving probe. As soon as the failed probe comes back up, the
collector will resume receiving its flows.

If the collector fails, each probe will still write flows to its own
database. Upon collector recovery, you can coalesce MySQL_schema_A and
MySQL_schema_B into MySQL_schema_collector without loss.
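The coalescing itself can be a plain INSERT ... SELECT that skips rows
the collector already has. A sketch with placeholder schema/table
names (check the tables your ntopng version actually creates before
running anything like this):

```shell
# Relies on a unique key over the flow tuple so that rows already
# present in the collector schema are silently skipped.
mysql -u dbuser -p <<'SQL'
INSERT IGNORE INTO flowdb_collector.flows
    SELECT * FROM flowdb_a.flows;
INSERT IGNORE INTO flowdb_collector.flows
    SELECT * FROM flowdb_b.flows;
SQL
```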

--------
Regarding the use of nProbe: it depends on the traffic rates you want
to handle. In high-speed environments nProbe is a necessary component.

Simone


On Tue, Mar 8, 2016 at 6:24 PM, Finze, Robert <[email protected]> wrote:

> Hi List,
>
> in a previous thread I've started asking some questions about a
> redundant setup for collecting flows with multiple probes and a central
> collector.
>
> The idea is to have multiple probes which create netflows from
> port-mirrors and then send these flows to a central collector.
>
> Since both probes receive the same traffic, all flows will still be
> captured in case one probe goes offline.
> To not lose flows in case the collector is offline, the probes should
> also save the flows in parallel to a database.
> Since ntopng and nprobe use different SQL schemas, all DB writes need
> to be done by ntopng.
>
> I've come up with 2 suggestions which I'd like to put up for discussion:
>
> ===================================================
>
> Each probe runs two nprobe instances on two NICs and two ntopng
> instances to save flows to a remote SQL cluster. Additionally, the
> nprobe instances send NetFlow to the central collector.
>
> The collector is also running nprobe to collect the flows and forward
> them to ntop.
>
> Server A1 (Probe):
> ------------------
> nprobe --zmq tcp://*:5551 -i eth1 -V 10 -G -n serverB:2055
> nprobe --zmq tcp://*:5552 -i eth2 -V 10 -G -n serverB:2055
>
> ntopng -i tcp://127.0.0.1:5551 -q -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
> ntopng -i tcp://127.0.0.1:5552 -q -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
> Server A2 (Probe):
> ------------------
> nprobe --zmq tcp://*:5551 -i eth1 -V 10 -G -n serverB:2055
> nprobe --zmq tcp://*:5552 -i eth2 -V 10 -G -n serverB:2055
>
> ntopng -i tcp://127.0.0.1:5551 -q -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
> ntopng -i tcp://127.0.0.1:5552 -q -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
> Server B (Collector):
> ---------------------
> nprobe --zmq tcp://*:5551 -V 10 -i none --collector-port 2055 -n none -G
>
> ntopng -i tcp://127.0.0.1:5551 -d /storage/ntopng -q -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
> ===================================================
> ===================================================
>
> This setup does not use nprobe but instead cascades ntopng instances
> to collect and forward flows.
> One open question is whether multiple ntopng instances on the same
> server are possible and would display the same data.
>
> Server A1 (Probe):
> ------------------
> ntopng -i eth1 -i eth2 -I tcp://*:4441 -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
> Server A2 (Probe):
> ------------------
> ntopng -i eth1 -i eth2 -I tcp://*:4442 -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
>
> Server B (Collector):
> ---------------------
> ntopng -i tcp://serverA1:4441 -i tcp://serverA2:4442 -F
> "mysql;ip-sql-cluster;flowdb;ntopdb;dbuser;dbuserpw"
>
>
> ===================================================
>
>
> I haven't yet had time to test these setups, but I would like to know
> how they compare.
> If they achieve the required redundancy, which is more robust, which has
> less overhead for high-volume traffic, which is easier to maintain, etc.
>
> Bonus: being able to tell on server B from which source
> (server/interface) a flow originated.
>
> Any thoughts, suggestions and questions are welcome.
>
>
> Cheers
>
> Robert
>
> _______________________________________________
> Ntop mailing list
> [email protected]
> http://listgateway.unipi.it/mailman/listinfo/ntop
>