I think you can use Flume as well.
Thanks,
Mudit
On 6/22/16, 12:29 PM, "Pariksheet Barapatre" wrote:
>Anybody have any idea on this?
>
>Thanks
>Pari
>
>On 20 June 2016 at 14:36, Pariksheet Barapatre
>wrote:
>
>> Hello All,
>>
>> I have data coming from sensors into a Kafka cluster in text format
Hi,
You can also use Storm; there you have the option of rotating the file.
You can also write to Hive directly.
Best regards / Mit freundlichen Grüßen / Sincères salutations
M. Lohith Samaga
-----Original Message-----
From: Mudit Kumar [mailto:mudit.ku...@askme.in]
Sent: Wednesday, J
Hi,
I'm facing a strange issue in my Kafka cluster. Could anybody please help me
with it? The issue is as follows:
We have a 3-node Kafka cluster. We installed ZooKeeper separately and have
pointed the brokers to it. ZooKeeper is also 3 nodes, but for our POC setup,
the ZooKeeper nodes
Thanks for your suggestions. If Kafka Connect provides the same
functionality as Flume and Storm, why should we go for another
infrastructure investment?
Kafka Connect effectively copies data from a Kafka topic to HDFS through a
connector. It supports Avro as well as Parquet; I am looking if
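For reference, a minimal HDFS sink configuration of the kind discussed here
might look like the following. This is only a sketch based on the Confluent
kafka-connect-hdfs quickstart; the topic name, URLs, and sizes are
placeholders, so check the connector docs for your version:

  name=hdfs-sink
  connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
  tasks.max=1
  topics=sensor-data
  hdfs.url=hdfs://namenode:8020
  flush.size=1000
  # optional Hive integration, as mentioned in this thread
  hive.integration=true
  hive.metastore.uris=thrift://metastore:9083
  schema.compatibility=BACKWARD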
Hi,
Is it possible to configure the command line tools like
kafka-consumer-groups.sh, kafka-topics.sh, and all other commands that are
not a consumer or producer to connect to an SSL-only Kafka cluster?
Regards,
Radu
Hi Pari,
Can you clarify which scenario you are looking to implement?
1) plaintext Kafka data --> plaintext HDFS data readable by hive
2) plaintext Kafka data --> avro/parquet HDFS data readable by hive
Regards,
On Wed, Jun 22, 2016 at 6:02 AM, Pariksheet Barapatre <
pari.data...@gmail.com> wr
You need to pass the correct options, similar to how you would for a
client. We use consumer-groups in a Docker container, in an environment
which is now SSL-only (since the Schema Registry now supports it).
On Wed, Jun 22, 2016 at 2:47 PM Radu Radutiu wrote:
> Hi,
>
> Is it possible to co
To elaborate:
We start the process with --command-config /some/folder/ssl.properties; the
file is included in the image and contains the SSL properties the tool
needs, which is a subset of the properties the client uses (those specific
to SSL). In this case the certificate is accessed in a data container
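For anyone searching the archives later, a sketch of what that
ssl.properties subset typically contains (key names from the Kafka SSL
docs; paths and passwords are placeholders):

  security.protocol=SSL
  ssl.truststore.location=/some/folder/client.truststore.jks
  ssl.truststore.password=changeit
  # only needed if the brokers require client authentication
  ssl.keystore.location=/some/folder/client.keystore.jks
  ssl.keystore.password=changeit
  ssl.key.password=changeit

and a typical invocation would then be something like:

  kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9093 \
      --command-config /some/folder/ssl.properties --list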
Hi All,
Where can I configure this? Please can someone suggest the configuration
path in Kafka.
Banoth Kotesh
Computer Science and Engineering(2010-14),
NIT Rourkela,
+917338918143
Hi All,
We have seen data loss in Kafka after we restarted the cluster.
We have 3 brokers in the cluster. In order to prevent data loss we have
configured *min.insync.replicas=2* for all topics.
Initially, when all brokers (1, 2, 3) were live, we produced a few messages
(say 50 messages). Then we killed br
Hi Madhukar,
It looks like you've had an unclean leader election in this case. Have a
look at the documentation for unclean.leader.election.enable and set it to
false if you'd like to try and avoid data loss in this scenario. By
default it is set to true.
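For the archives, the durability-related settings discussed in this thread
live in server.properties (values illustrative):

  # do not let an out-of-sync replica become leader;
  # trades availability for durability
  unclean.leader.election.enable=false
  # with producer acks=all, require at least 2 in-sync copies before acking
  min.insync.replicas=2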
Reference: http://kafka.apache.org/docu
Hi Rahul,
Is the path "/tmp/kafka-logs/" or "/temp/kafka-logs"?
If the path is under "/tmp/", the files may be deleted whenever the machine
restarts, which would explain the FileNotFoundException.
You can change the log location to some other path and restart all brokers.
This might fix the
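If the data directory really is under /tmp, moving it is a one-line change
in server.properties (the path below is just an example):

  log.dirs=/var/lib/kafka-logs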
Hi Dustin,
Thanks for your quick reply.
Yes, we didn't set the *unclean.leader.election.enable* property, so it
takes *true* by default. After setting it to *false* and repeating the same
test, we observed that leader election happens only once at least one
broker from the ISR comes back to life, and no
Hi Madhukar,
Thanks for your quick response. The path is "/tmp/kafka-logs/". But the servers
have not been restarted any time lately. The uptime for all the 3 servers is
almost 67 days.
Regards,
Rahul Misra
-----Original Message-----
From: Madhukar Bharti [mailto:bhartimadhu...@gmail.com]
Se
Hi Dustin,
I am looking for option 1.
Looking at the Kafka Connect code, I guess we need to write converter code
if one is not already available.
Thanks in advance.
Regards
Pari
On 22 June 2016 at 18:50, Dustin Cote wrote:
> Hi Pari,
>
> Can you clarify which scenario you are looking to implement?
> 1) plain
We seem to be having a strange issue with a cluster of ours; specifically with
the __consumer_offsets topic.
When we brought the cluster online, log compaction was turned off. Realizing
our mistake, we turned it on, but only after the topic had over 31,018,699,972
offsets committed to it. Log
Is the log cleaner thread running? We've seen issues where the log cleaner
thread dies after too much logged data. You'll see a message like this:
[kafka-log-cleaner-thread-0], Error due to
java.lang.IllegalArgumentException: requirement failed: 9750860 messages in
segment MY_FAVORITE_TOPIC_IS_SOR
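Related note: if I remember right, log.cleaner.enable defaulted to false
before 0.9.0.1, so on older brokers compacted topics such as
__consumer_offsets grow unbounded until it is switched on explicitly in
server.properties:

  log.cleaner.enable=true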
I don't see any built-in support for this, but I think you can write a
class that implements io.confluent.connect.hdfs.Format:

  public interface Format {
    RecordWriterProvider getRecordWriterProvider();
    SchemaFileReader getSchemaFileReader(AvroData avroData);
    HiveUtil getHiveUtil(HdfsSinkConnectorConfig config, AvroData avroData,
                         HiveMetaStore hiveMetaStore);
  }
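To make that concrete, a compile-only skeleton of such a class could look
like the following. The import paths and the getHiveUtil parameters are my
assumptions from the 2016-era repo layout, and the method bodies are
deliberate stubs, since the real work is in the RecordWriterProvider you
supply:

  import io.confluent.connect.avro.AvroData;
  import io.confluent.connect.hdfs.Format;
  import io.confluent.connect.hdfs.HdfsSinkConnectorConfig;
  import io.confluent.connect.hdfs.RecordWriterProvider;
  import io.confluent.connect.hdfs.SchemaFileReader;
  import io.confluent.connect.hdfs.hive.HiveMetaStore;
  import io.confluent.connect.hdfs.hive.HiveUtil;

  // Hypothetical text format, plugged in via format.class=...TextFormat
  public class TextFormat implements Format {
    @Override
    public RecordWriterProvider getRecordWriterProvider() {
      // TODO: return a provider whose RecordWriter writes each
      // record.value().toString() as a line in the HDFS file
      throw new UnsupportedOperationException("left out of this sketch");
    }

    @Override
    public SchemaFileReader getSchemaFileReader(AvroData avroData) {
      throw new UnsupportedOperationException("left out of this sketch");
    }

    @Override
    public HiveUtil getHiveUtil(HdfsSinkConnectorConfig config,
                                AvroData avroData,
                                HiveMetaStore hiveMetaStore) {
      throw new UnsupportedOperationException("left out of this sketch");
    }
  }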
Thanks Unmesh for the detailed explanation.
When you change your topology's stateful operators -- for example, even if
you did not change a non-windowed aggregation to a windowed aggregation,
but just changed the aggregate / reduce logic -- the underlying state
stores as well as their corresponding
Fascinating.
We are seeing no errors or warning in the logs after restart. It appears on
this broker that the compaction thread is working:
[2016-06-22 10:33:49,179] INFO Rolled new log segment for
'__consumer_offsets-28' in 1 ms. (kafka.log.Log)
[2016-06-22 10:34:00,968] INFO Deleting segme
Yes, I believe what you're looking for is what Dave described. Here's the
source of that interface:
https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/Format.java
There already exists a StringConverter that should handle the conversion in
and out.
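For scenario 1 (plain text in, plain text out), the converter side is set
in the Connect worker configuration, e.g.:

  key.converter=org.apache.kafka.connect.storage.StringConverter
  value.converter=org.apache.kafka.connect.storage.StringConverter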
This smells like a bug to me.
On Wed, Jun 22, 2016 at 6:54 PM, Lawrence Weikum
wrote:
> Fascinating.
>
> We are seeing no errors or warning in the logs after restart. It appears
> on this broker that the compaction thread is working:
>
> [2016-06-22 10:33:49,179] INFO Rolled new log segment fo
By the way, https://issues.apache.org/jira/browse/KAFKA-3587 was fixed in
0.10.0.0.
Ismael
On Wed, Jun 22, 2016 at 7:28 PM, Tom Crayford wrote:
> Is the log cleaner thread running? We've seen issues where the log cleaner
> thread dies after too much logged data. You'll see a message like this:
Radu,
Please follow the instructions here:
http://kafka.apache.org/documentation.html#security_ssl . At
the end of the SSL section there is an example of passing SSL configs to
the producer and consumer command line tools.
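For convenience, the commands in that section boil down to something like
this (host, port, and config file names are placeholders):

  kafka-console-producer.sh --broker-list localhost:9093 --topic test \
      --producer.config client-ssl.properties
  kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test \
      --new-consumer --consumer.config client-ssl.properties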
Thanks,
Harsha
On Wed, Jun 22, 2016, at 07:40
Hi Kotesh,
log.retention.hours sets how long messages are kept in the log, and
log.retention.check.interval.ms sets how often the log cleaner checks
whether messages should be deleted based on the retention setting.
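Concretely, in server.properties (the values shown are the defaults):

  log.retention.hours=168                  # keep messages for 7 days
  log.retention.check.interval.ms=300000   # check every 5 minutes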
I hope this helps.
Alex
On Wed, Jun 22, 2016 at 3:13 AM, kotesh banoth wrote:
> H
Hi,
I have searched a lot for an answer to my question and did not find a good
one; can someone in this group help me?
When the leader broker for a partition fails, ZK elects a new leader and
this may take seconds. What happens to data published to that broker during
the election?
How does Kafka handle messages to a failed b
Many Thanks Dave and Dustin for your inputs. I will check code and try to
implement proposed solution.
Cheers
Pari
On 22 June 2016 at 23:25, Dustin Cote wrote:
> Yes, I believe what you're looking for is what Dave described. Here's the
> source of that interface
>
> https://github.com/confluen
If your producer has acks set to 0, or if retries is set to 0 in the
properties, the message will be lost; otherwise it will most likely be
retried and sent to the new leader.
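As a sketch, a producer configured to survive a leader failover would look
something like this (broker list and topic name are placeholders; with
acks=all and retries > 0 the client re-sends to the newly elected leader
after its metadata refresh):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class FailoverSafeProducer {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
      props.put("key.serializer",
          "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer",
          "org.apache.kafka.common.serialization.StringSerializer");
      // wait for the in-sync replicas to ack, and retry on leader changes
      props.put("acks", "all");
      props.put("retries", "5");
      try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
      }
    }
  }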
On Thu, Jun 23, 2016 at 2:53 AM Saeed Ansari wrote:
> Hi,
> I searched a lot for my question and I did not find a good answer may