> To me, the performance numbers themselves don't matter as much as
> managing expectations does: should I *expect* to be able to pass all
> of my events through broker?
This question depends on the event type and your concrete topology, and is
hard to answer in general. We can say "in our point-to-point test scenario,
our measurements show an upper bound of X events/sec for a workload
consisting of message type Y." As Broker gets more traction, I assume we
will get many more data points and a better understanding of the
performance boundaries.

> Trying to express things a slightly different way, I was concerned
> that the different numbers from the different libraries were being
> interpreted as an apples-to-apples comparison. Modifying CAF to
> achieve the same results as e.g. 0mq would, at some point and in some
> way, eventually require modifying CAF to be more like 0mq. I don't
> think that would be good, because 0mq and CAF aren't (and shouldn't
> be, in my humble opinion) the same thing.

0mq/nanomsg are only thin wrappers around a blob of bytes, whereas CAF
provides much more than that. However, I don't think the comparison we did
was unrealistic: we looked at the overhead of sending a stream of simple
(nearly empty) messages between two remote endpoints. This "dumbs down" CAF
to a point where we're primarily stressing the messaging subsystem, without
using much of the higher-level abstractions (CAF still has to go through
its serialization layer). After Dominik's performance tweaks, the two
libraries operate in the same order of magnitude, which strikes me as
reasonable. 0mq still outperforms CAF in terms of maximum message rate for
this specific workload, but that is also not surprising at this point,
because it has received a lot of attention and optimization over the past
years specifically targeting high-throughput scenarios.

    Matthias
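
P.S. To make the shape of the test concrete: a minimal sketch of such a
point-to-point throughput measurement, using plain 0mq PUSH/PULL sockets
(this is an illustration, not our actual benchmark harness; the endpoint
address and message count are arbitrary), could look roughly like this:

  // throughput_sketch.cpp -- illustrative only
  // build: c++ -std=c++11 throughput_sketch.cpp -lzmq
  #include <zmq.h>
  #include <chrono>
  #include <cstdio>
  #include <cstring>

  int main(int argc, char** argv) {
    const int n = 1000000;                      // number of (nearly empty) messages
    const char* addr = "tcp://127.0.0.1:5555";  // arbitrary endpoint for the example
    void* ctx = zmq_ctx_new();
    if (argc > 1 && std::strcmp(argv[1], "send") == 0) {
      // Sender: connect a PUSH socket and blast n one-byte messages.
      void* s = zmq_socket(ctx, ZMQ_PUSH);
      zmq_connect(s, addr);
      for (int i = 0; i < n; ++i)
        zmq_send(s, "x", 1, 0);
      zmq_close(s);
    } else {
      // Receiver: bind a PULL socket and time how long draining n messages takes.
      void* s = zmq_socket(ctx, ZMQ_PULL);
      zmq_bind(s, addr);
      char buf[16];
      auto start = std::chrono::steady_clock::now();
      for (int i = 0; i < n; ++i)
        zmq_recv(s, buf, sizeof(buf), 0);
      auto secs = std::chrono::duration<double>(
          std::chrono::steady_clock::now() - start).count();
      std::printf("%d msgs in %.2f s -> %.0f msgs/s\n", n, secs, n / secs);
      zmq_close(s);
    }
    zmq_ctx_term(ctx);
  }

Start the receiver first, then the sender in a second process (or on a
second host). The reported rate is only a rough upper bound for a trivial
one-byte payload; real Bro events carry typed values that still have to go
through (de)serialization on top of this.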