On 10/10/2013, at 23:14, S Ahmed <sahmed1...@gmail.com> wrote:

> Is anyone out there running a single-broker Kafka setup?
>
> How about with only 8 GB RAM?
>
> I'm looking at one of the better dedicated server providers, and an 8 GB
> server is pretty much what I want to spend at the moment. Would it make
> sense going this route? This same server would also potentially be
> running ZooKeeper.
>
> In terms of messages per second, at most I would be seeing about 2000
> messages per second, of 20KB to 200KB in size.
>
> I know the people at LinkedIn are running with, I believe, 24GB of RAM.
My personal newbie experience, which is surely misconfigured in places, got me up to about 70 MB/sec, both with controlled 1 KB messages (hence ~70K msg/sec) and with more random test data ranging from 100 bytes to a couple of MB.

At first I thought the 70 MB/sec was the hard disk limit, but I got the same result on a proper Linux server with a 10K RPM disk and on a Mac mini with a 5400 RPM disk, which confused me. The mini has 2 GB of RAM; the Linux server has 8 or 16, I can't recall at the moment.

The test was performed with both single and multiple producers and consumers: one producer = 70 MB/sec, two producers = 35 MB/sec each, and so forth. Running standalone instances on each server gave the same value, and running both together as a 2-partition, 2-replica cluster gave the same result.

As far as I understood, more memory just means more kernel page cache to compensate for slow disks; Kafka does not seem to depend heavily on heap memory for queueing.
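For reference, below is a minimal sketch of the kind of producer throughput test described above, written against the newer Java producer client (org.apache.kafka.clients.producer.KafkaProducer), which may not be the client the original test used. The broker address localhost:9092, the topic name perf-test, and the message count are assumptions, not values from the test.

import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerThroughputTest {
    public static void main(String[] args) throws Exception {
        // Assumed broker address; adjust to your setup.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");

        int numMessages = 1_000_000;   // assumed record count for the run
        int messageSize = 1024;        // 1 KB payload, as in the test above
        byte[] payload = new byte[messageSize];
        new Random().nextBytes(payload);

        long start = System.currentTimeMillis();
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < numMessages; i++) {
                // "perf-test" is a hypothetical topic name
                producer.send(new ProducerRecord<>("perf-test", payload));
            }
            producer.flush(); // wait for all buffered records to be sent
        }
        long elapsedMs = System.currentTimeMillis() - start;

        double mbSent = (double) numMessages * messageSize / (1024 * 1024);
        System.out.printf("Sent %.1f MB in %d ms (%.1f MB/sec)%n",
                          mbSent, elapsedMs, mbSent / (elapsedMs / 1000.0));
    }
}

Running two copies of this against the same broker should show the aggregate-throughput split mentioned above (roughly half the MB/sec per producer).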