It can, but it will not perform very well. Kafka fully instantiates
messages in memory (basically as a byte[]), so if you send a 100 MB message
the server will do a 100 MB allocation to hold that data prior to writing it
to disk.
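The brokers also cap message size outright, so even apart from the
allocation cost you would have to raise the limits. A minimal sketch of the
relevant broker settings (the 100 MB value is illustrative, and the defaults
in your Kafka version may differ):

# server.properties -- broker-side limits (default is on the order of 1 MB)
message.max.bytes=104857600
# replication fetches must be able to pull the large message too
replica.fetch.max.bytes=104857600

The producer's max.request.size and the consumer's fetch size would need a
matching bump as well.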
I think MongoDB does have blob support, so passing a pointer via Kafka as
you suggest should work well.
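Something like this, as a minimal sketch using the Java producer API (the
topic name, blob id, and JSON layout are made up for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BlobPointerProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // The 100 MB blob itself lives in MongoDB (e.g. GridFS); only a
        // small pointer record travels through Kafka.
        String blobId = "54f1b2c3d4e5f60718293a4b"; // hypothetical ObjectId
        String pointer = "{\"store\":\"mongodb\",\"id\":\"" + blobId + "\"}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("file-events", blobId, pointer));
        }
    }
}

Consumers then fetch the blob from MongoDB by id, so Kafka only ever carries
a few hundred bytes per file.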
Or HDFS, and use Kafka for the file event, yup. Processing on the files can
be done without the MapReduce overhead in Hadoop now using Apache Tez (or
something that uses Tez, like Pig).
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
Hello everyone,
Can Kafka be used for binary large objects of 100 MB?
Or should I use a different solution to store these files like MongoDB and
maybe send the location of these files in MongoDB over Kafka?
Thanks in advance,
Wouter