I checked my target topic and I see fewer messages than in the source topic.
(If the source topic has 5 messages, I see 2 messages in my target topic.)
What settings do I need to change?
And when I try to consume messages from the target topic, I get a
ClassCastException.
java.lang.ClassCastException: java.lang.
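A likely cause of a ClassCastException on consume is a value.deserializer
that does not match the type actually stored in the topic (e.g. the String
deserializer while the records hold the custom POJO). A minimal consumer
sketch, assuming hypothetical MyPojo and com.example.MyPojoDeserializer
classes:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TargetTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "target-topic-reader");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // Must name the deserializer for the custom POJO (hypothetical here),
        // not StringDeserializer, or the cast on read will fail.
        props.put("value.deserializer", "com.example.MyPojoDeserializer");

        try (KafkaConsumer<String, MyPojo> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("targetTopic"));
            ConsumerRecords<String, MyPojo> records = consumer.poll(1000);
            for (ConsumerRecord<String, MyPojo> record : records)
                System.out.println(record.value());
        }
    }
}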
Hi Andrew,
Thank you for your reply. I did some experiments with the acks = -1 setting
and had a few seconds of downtime, but that's far better than losing
messages, so I will go with it. Thanks again for your help.
-- Justin
On Mon, Oct 10, 2016 at 6:49 AM, Andrew Grasso
wrote:
> Hi Justin,
Hi all,
I have a custom datatype defined (a POJO class).
I copy messages from one topic to another topic, but I do not see any
messages in my target topic.
This works for string messages, but not for my custom message.
What might be the cause?
I followed this sample [1]
[1]
https://github.com/apache/kaf
I checked my targetTopic for available messages, but it says '0'. What might
be causing the issue here when merging two topics with custom-type messages?
On 11 October 2016 at 14:44, Ratha v wrote:
> Thanks, this demo works perfectly for string messages.
>
> I have a custom messageType defined (a Java POJO
Thanks, this demo works perfectly for string messages.
I have a custom messageType defined (a Java POJO class), and I have a SerDe
implemented for it.
Now, after merging sourceTopic --> targetTopic, I cannot consume the
messages; that is, the consumer does not return any messages.
What might be the caus
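One thing to check is the serde wiring of the copy job itself: the custom
SerDe has to be applied on both the source read and the target write, or the
copy only works for strings. A minimal topic-copy sketch, assuming
hypothetical MyPojo, MyPojoSerializer and MyPojoDeserializer classes (this
uses a newer Streams API than the 0.10-era sample linked above):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class CopyTopic {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "copy-topic-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Build a Serde from your own (de)serializer implementations.
        Serde<MyPojo> pojoSerde =
            Serdes.serdeFrom(new MyPojoSerializer(), new MyPojoDeserializer());

        StreamsBuilder builder = new StreamsBuilder();
        // The same Serde must be used on both ends of the copy.
        builder.stream("sourceTopic", Consumed.with(Serdes.String(), pojoSerde))
               .to("targetTopic", Produced.with(Serdes.String(), pojoSerde));

        new KafkaStreams(builder.build(), props).start();
    }
}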
The documentation is mostly fixed now:
http://kafka.apache.org/0101/documentation.html. Thanks to Derrick Or for
all the help. Let me know
if anyone notices any additional problems.
-Jason
On Mon, Oct 10, 2016 at 1:10 PM, Jason Gustafson wrote:
> Hello Kafka users, developers and client-develo
With that link I came across the producer-perf-test tool, which is quite
useful: it takes the Go (Sarama) variable out of the equation and lets me
tweak settings quickly.
As you suggested, Eno, I attempted to copy the LinkedIn settings. With
100-byte records, I get up to about 600,000 records/s
Hello Kafka users, developers and client-developers,
This is the second candidate for release of Apache Kafka 0.10.1.0. This is
a minor release that includes great new features including throttled
replication, secure quotas, time-based log searching, and queryable state
for Kafka Streams. A full l
Hi,
I have a custom Kafka consumer which reads messages from a topic, hands over
the processing of the messages to a different thread, and while the messages
are being processed, it pauses the topic and keeps polling the Kafka topic (to
maintain heartbeats) and also commits offsets using commi
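A minimal sketch of that pattern, with illustrative topic names and timings:
pause the assigned partitions, keep poll() running for liveness while a
worker thread processes the batch, then resume and commit.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PausingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "pausing-consumer");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (records.isEmpty()) continue;

                // Hand the batch to another thread and stop fetching.
                Future<?> done = worker.submit(() -> process(records));
                consumer.pause(consumer.assignment());

                // Keep polling (returns nothing while paused) so the
                // group's liveness machinery stays happy.
                while (!done.isDone()) {
                    consumer.poll(100);
                }
                consumer.resume(consumer.assignment());
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecords<String, String> records) {
        // application-specific processing goes here
    }
}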
Sure, good ideas. I'll try multiple producers, localhost and LAN, to see if
there's any difference
Yep, Gwen, the Sarama client. Anything to worry about there outside of
setting the producer configs (which ones would you set?) and the number of
buffered channels? (Currently, buffered channels are up to 10k.)
Thanks!
Just note that, in general, doing what Todd advises is pretty risky.
We've seen controllers get into all kinds of weird situations when
topics were deleted from ZK directly (including getting stuck in an
infinite loop, deleting unrelated topics, and all kinds of strangeness)
- we have no tests for thos
Out of curiosity - what is "Golang's Kafka interface"? Are you
referring to the Sarama client?
On Sun, Oct 9, 2016 at 9:28 AM, Christopher Stelly wrote:
> Hello,
>
> The last thread available regarding 10GBe is about 2 years old, with no
> obvious recommendations on tuning.
>
> Is there a more comple
Hello David,
Your observation is correct: the stream time reasoning depends on the
buffered records from each of the input topic-partitions, and hence is
"data-driven".
Currently, to get around this, I'd recommend having the producer send
certain "marker" messages periodically to ensure st
Sachin, do you have delete.topic.enable=true in your server.properties?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On October 10, 2016 at 4:11:36 PM, Sachin Mittal (sjmit...@gmail.com) wrote:
Hi,
We are doing some testing and need to frequently wipe out the Kafka logs
and delete the topic
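With that flag enabled on every broker, a topic can be deleted through the
admin API instead of by editing ZooKeeper directly. A minimal sketch using
the newer AdminClient (placeholder broker address and topic name):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class DeleteTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Blocks until the deletion has been accepted by the controller.
            admin.deleteTopics(Collections.singleton("test-topic")).all().get();
        }
    }
}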
Hi Chris,
I think the first step would be to set up the system so we can easily identify
the bottlenecks. With your setup I'm currently worried about 2 things:
1. The system is not being driven hard enough. In particular, one producer
might not be enough. I'd recommend running 3 producer processes
> On Oct 6, 2016, at 09:59, Ismael Juma wrote:
>
> On Thu, Oct 6, 2016 at 2:51 PM, Avi Flax wrote:
>>
>> Does this mean that the next release (after 0.10.1.0, maybe ~Feb?) might
>> remove altogether the requirement that Streams apps be able to access
>> ZooKeeper directly?
>
>
> That's the p
Publisher/subscriber systems can be divided into two categories:
1) Topic-based model
2) Content-based model - provides more accurate results than the topic-based
model, since subscribers are interested in the content of the messages rather
than subscribing to a topic and getting all of its messages.
Kafka i
Hi,
I am using Kafka 0.9.0.x, and in a multithreaded system I have created only
one instance of the Kafka producer.
When I call producer.close(), it only closes the communication, and the
producer can no longer send messages to the topic.
But why is its object still valid if we cannot send requests through it?
Moreover, I ca
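A small sketch of the behavior being asked about, with a placeholder topic
name: after close() the reference is still a valid Java object, but the
producer refuses new work (in the Java client a send() after close() throws
IllegalStateException).

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CloseThenSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "hello"));
        producer.close(); // releases network resources; the object remains

        try {
            producer.send(new ProducerRecord<>("my-topic", "after-close"));
        } catch (IllegalStateException e) {
            System.out.println("send after close failed: " + e.getMessage());
        }
    }
}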
Thanks for the responses. I was able to identify a bug in how the times are
obtained (offsets resolved as unknown cause the issue):
“Actually, I think the bug is more subtle. What happens when a consumed topic
stops receiving messages? The smallest timestamp will always be the static
timestamp
Hi,
We are doing some testing and need to frequently wipe out the Kafka logs
and delete the topic completely. There is no problem starting/stopping
ZooKeeper and the server multiple times.
So what is the best way of purging a topic and removing its reference from
ZooKeeper entirely?
I can physically
Hi Justin,
Setting the required acks to -1 does not require that all assigned brokers
are available, only that all members of the ISR are available. If a broker
goes down, the producer is able to commit messages once the faulty broker
is evicted from the ISR. This can continue even if only one bro
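A minimal producer-side sketch of this, with placeholder broker and topic
names; acks=all is the newer spelling of acks=-1. Pairing it with the
broker/topic setting min.insync.replicas (not shown here) keeps the ISR from
shrinking to a single broker.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // wait for the full current ISR, i.e. acks=-1
        props.put("retries", "2147483647"); // retry transient errors instead of dropping
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}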
Hi Sachin,
Yes, it will be called each time a key is modified; it will do this
continuously until you stop the app.
Eno
> On 9 Oct 2016, at 16:50, Sachin Mittal wrote:
>
> Hi,
> It is actually a KTable-KTable join.
>
> I have a stream (K1, A) which is aggregated as (Key, List) hence it
> creat
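For reference, a minimal sketch of such a KTable-KTable join, with
placeholder topic names and a newer Streams API than the 0.10.x one in this
thread; the joiner is invoked again on every update of a key on either side.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;

public class TableJoinExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "table-join-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                  Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                  Serdes.String().getClass().getName());

        StreamsBuilder builder = new StreamsBuilder();
        KTable<String, String> left = builder.table("left-topic");
        KTable<String, String> right = builder.table("right-topic");

        // Re-evaluated whenever a key changes in either input table.
        left.join(right, (l, r) -> l + "|" + r)
            .toStream()
            .to("joined-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}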
Hi Syed,
Apache Kafka runs on a JVM. I think the question you should ask is: which
JVM does Apache Kafka require in production*? It doesn't really depend on
any specific Linux distribution.
* ...and I don't have that answer ;-)
Cheers,
Jens
On Wednesday, October 5, 2016, Syed Hussai
Hi Guozhang,
First, thank you for answering my question and giving me some suggestions.
But I'm sure I have read some introductions to Kafka.
In my producer, my code is (C code):
res = rd_kafka_conf_set(kafka_conf, "request.required.acks", "-1", NULL, 0);
res = rd_kafka_topic_conf_set(topic_conf, "produce.
Aris,
I am not aware of an out-of-the-box tool for Pcap->Kafka ingestion (in my
case we wrote our own back then). Maybe others know.
On Monday, October 10, 2016, Aris Risdianto wrote:
> Thank you for your answer, Michael.
>
> Actually, I have made a simple producer from Pcap to Kafka. Since it is
Thank you for your answer, Michael.
Actually, I have made a simple producer from Pcap to Kafka. Since the data is
not structured, it is difficult for a consumer to process further. But I will
take a look at Avro as you mentioned.
I was just wondering whether there is any proper implementation for this
requir
Aris,
even today you can already use Kafka to deliver NetFlow/Pcap/etc. messages,
and people are already using it for that (I did so in previous projects of
mine, too).
Simply encode your Pcap/... messages appropriately (I'd recommend taking a
look at Avro, which allows you to structure your d
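A minimal sketch of that encoding step, assuming an invented packet schema
(the timestamp/srcIp/dstIp/payload fields are placeholders, not a standard);
the resulting bytes can be produced to Kafka with a ByteArraySerializer.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class PacketAvroEncoder {
    private static final Schema SCHEMA = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Packet\",\"fields\":["
        + "{\"name\":\"timestamp\",\"type\":\"long\"},"
        + "{\"name\":\"srcIp\",\"type\":\"string\"},"
        + "{\"name\":\"dstIp\",\"type\":\"string\"},"
        + "{\"name\":\"payload\",\"type\":\"bytes\"}]}");

    public static byte[] encode(long ts, String src, String dst, byte[] payload)
            throws IOException {
        GenericRecord rec = new GenericData.Record(SCHEMA);
        rec.put("timestamp", ts);
        rec.put("srcIp", src);
        rec.put("dstIp", dst);
        rec.put("payload", java.nio.ByteBuffer.wrap(payload));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(SCHEMA).write(rec, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}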
Hello,
Is there any plan or implementation to use Kafka for delivering
sFlow/NetFlow/Pcap messages?
Best Regards,
Aris.
Elias,
yes, that is correct.
I also want to explain why:
One can always convert a KTable to a KStream (note: this is the opposite
direction of what you want to do) because one only needs to iterate through
the table to generate the stream.
To convert a KStream into a KTable (what you want to do
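A minimal sketch of both directions, with placeholder topic names and a
newer Streams API: toStream() covers the easy direction, and a
groupByKey().reduce() that keeps the latest value per key covers the other.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class ConversionSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Easy direction: iterate the table as its changelog stream.
        KTable<String, String> table = builder.table("table-topic");
        KStream<String, String> changelog = table.toStream();

        // Other direction: an aggregation that keeps the latest value per key.
        KStream<String, String> stream = builder.stream("stream-topic");
        KTable<String, String> latest =
            stream.groupByKey().reduce((oldValue, newValue) -> newValue);
    }
}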
It depends on which partitioner you are using; see [1] and [2]. From what I
understand, the `NewHashPartitioner` comes closest to the behavior of the
Kafka Java producer, but instead of going round-robin for null-keyed
messages, it picks a partition at random.
[1] https://godoc.org/github.com/Shopify/sa