In addition to the issue you bring up, the functionality as a whole has
changed. When you call OffsetFetchRequest, version = 0 needs to
preserve the old functionality
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/server/KafkaApis.scala#L678-L700
and version = 1 the new one:
ht
Will do. What did you have in mind? Just write a big file to disk and
measure the time it took to write? Maybe also read back? Using specific
APIs?
Apart from the local Win machine case, are you aware of any issues with
Amazon EC2 instances that may be causing that same latency in production?
Than
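For reference, a crude version of that check (write a big file, time it, read it back) could look like the sketch below. The sizes, file name, and the fsync-at-the-end choice are my own assumptions, not anything prescribed in this thread:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Crude sequential disk throughput check: write N megabytes, sync, read them back.
public class DiskIoCheck {
    // Returns {writeMbPerSec, readMbPerSec}.
    public static double[] run(int totalMb) throws IOException {
        File file = File.createTempFile("io-check", ".bin");
        file.deleteOnExit();
        byte[] chunk = new byte[1024 * 1024]; // 1 MB per write call

        long wStart = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(file)) {
            for (int i = 0; i < totalMb; i++) out.write(chunk);
            out.getFD().sync(); // force the data to disk, similar to fsync
        }
        double writeMbPerSec = totalMb / ((System.nanoTime() - wStart) / 1e9);

        long rStart = System.nanoTime();
        try (FileInputStream in = new FileInputStream(file)) {
            while (in.read(chunk) != -1) { /* drain the file */ }
        }
        double readMbPerSec = totalMb / ((System.nanoTime() - rStart) / 1e9);

        return new double[] { writeMbPerSec, readMbPerSec };
    }

    public static void main(String[] args) throws IOException {
        double[] r = run(64);
        System.out.printf("write: %.1f MB/s, read: %.1f MB/s%n", r[0], r[1]);
    }
}
```

Note the read numbers will be inflated by the page cache unless the file is much larger than RAM.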
Hi Srividhya,
See
http://search-hadoop.com/m/4TaT4B9tys1/&subj=Re+Kafka+0+8+2+release+before+Santa+Claus
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
On Mon, Jan 5, 2015 at 11:55 AM, Srividhya Shanmugam <
sriv
Not setting "log.flush.interval.messages" is good, since the default gives
the best latency. Could you do some basic I/O testing on the local FS on
your Windows machine to make sure the I/O latency is OK?
Thanks,
Jun
On Thu, Jan 1, 2015 at 1:40 AM, Shlomi Hazan wrote:
> Happy new year!
> I did
I'm using 0.8.2-beta and I'm trying to push data with the mirrormaker tool
from several remote sites to two datacenters. I'm testing this from a node
containing zk, broker and mirrormaker, and the data is pushed to a "normal"
cluster: 3 zk and 4 brokers with replication.
While the configuration seems
Hi,
That sounds a bit like needing full, cross-app, cross-network
transaction/call tracing, and not something specific or limited to Kafka,
doesn't it?
Otis
On Mon, Ja
OK, opened KAFKA-1841. KAFKA-1634 is also related.
-Dana
On Mon, Jan 5, 2015 at 10:55 AM, Gwen Shapira wrote:
> Ooh, I see what you mean - the OffsetAndMetadata (or PartitionData)
> part of the Map changed, which will modify the wire protocol.
>
> This is actually not handled in the Java client
Hi Kafka Team/Users,
We are using the LinkedIn Kafka data pipeline end-to-end.
Producer(s) -> Local DC Brokers -> MM -> Central brokers -> Camus Job ->
HDFS
This is working out very well for us, but we need to have visibility of
latency at each layer (Local DC Brokers -> MM -> Central brokers -> Ca
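One common way to get that visibility (not a Kafka feature, just a pattern) is to stamp each payload with its produce time and subtract at every hop. The field layout and names below are purely illustrative:

```java
import java.nio.ByteBuffer;

// Sketch of the timestamp-in-payload approach to per-layer latency:
// prepend the produce time (epoch millis) to each message body, and have
// each downstream stage subtract it from its own clock.
public class LatencyStamp {
    // Prefix the payload with the produce time.
    public static byte[] wrap(byte[] payload, long produceTimeMs) {
        return ByteBuffer.allocate(8 + payload.length)
                .putLong(produceTimeMs)
                .put(payload)
                .array();
    }

    // Latency observed at this hop; only meaningful if clocks are synchronized.
    public static long latencyMs(byte[] stamped, long nowMs) {
        return nowMs - ByteBuffer.wrap(stamped).getLong();
    }

    // Recover the original payload bytes.
    public static byte[] payload(byte[] stamped) {
        byte[] out = new byte[stamped.length - 8];
        ByteBuffer.wrap(stamped, 8, out.length).get(out);
        return out;
    }
}
```

The caveat is clock skew across hosts: the numbers are only as good as your NTP setup, which ties into the time-synchronization discussion elsewhere in this thread.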
Ooh, I see what you mean - the OffsetAndMetadata (or PartitionData)
part of the Map changed, which will modify the wire protocol.
This is actually not handled in the Java client either. It will send
the timestamp no matter which version is used.
This looks like a bug and I'd even mark it as block
"preinitialize.metadata=true/false" can help to a certain extent. If the
kafka cluster is down, then metadata won't be available for a long time
(not just for the first message). So to be safe, we have to set
"metadata.fetch.timeout.ms=1" to fail fast as Paul mentioned. I can also
echo Jay's comment that o
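Putting the fail-fast part of that together, a new-producer (0.8.2) config along these lines is one option; the broker address is a placeholder, and whether `1` ms is too aggressive depends on your tolerance for spurious failures:

```java
import java.util.Properties;

// Illustrative fail-fast settings for the 0.8.2 Java producer, per the
// discussion above. "broker1:9092" is a placeholder, not a real host.
public class FailFastProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        // Fail almost immediately when metadata cannot be fetched (cluster down),
        // instead of letting send() block for the default timeout.
        props.put("metadata.fetch.timeout.ms", "1");
        // Throw rather than block when the client-side buffer fills up.
        props.put("block.on.buffer.full", "false");
        return props;
    }
}
```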
specifically comparing 0.8.1 --
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/api/OffsetCommitRequest.scala#L37-L50
```
(1 to partitionCount).map(_ => {
  val partitionId = buffer.getInt
  val offset = buffer.getLong
  val metadata = readShortString(buffer)
  (TopicAndPartit
```
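Translated into plain ByteBuffer terms, the version-0 per-partition layout that loop decodes is just (int32 partition, int64 offset, short-string metadata), with no timestamp field. A round-trip sketch, using my own names rather than Kafka's:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Round-trips one version-0 partition entry: int32 partitionId, int64 offset,
// then a "short string" (int16 length prefix + UTF-8 bytes), matching the
// 0.8.1 decode loop quoted above.
public class V0PartitionEntry {
    public static ByteBuffer write(int partitionId, long offset, String metadata) {
        byte[] meta = metadata.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 2 + meta.length);
        buf.putInt(partitionId);
        buf.putLong(offset);
        buf.putShort((short) meta.length); // readShortString's length prefix
        buf.put(meta);
        buf.flip();
        return buf;
    }

    // Returns {partitionId, offset, metadata}.
    public static Object[] read(ByteBuffer buffer) {
        int partitionId = buffer.getInt();
        long offset = buffer.getLong();
        byte[] meta = new byte[buffer.getShort()];
        buffer.get(meta);
        return new Object[] { partitionId, offset, new String(meta, StandardCharsets.UTF_8) };
    }
}
```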
@Sa,
the required.acks is a producer-side configuration. Setting it to -1 means
requiring acks from all in-sync replicas.
On Fri, Jan 2, 2015 at 1:51 PM, Sa Li wrote:
> Thanks a lot, Tim, this is the config of brokers
>
> --
> broker.id=1
> port=9092
> host.name=10.100.70.128
> num.network.threads=4
> num
Several features in ZooKeeper depend on server time. I would highly recommend
that you properly set up ntpd (or whatever), then try to reproduce.
-Jon
On Jan 2, 2015, at 2:35 PM, Birla, Lokesh wrote:
> We don't see zookeeper expiration. However, I noticed that our servers'
> system time is NOT sy
Ah, I see :)
The readFrom function basically tries to read two extra fields if you
are on version 1:
if (versionId == 1) {
  groupGenerationId = buffer.getInt
  consumerId = readShortString(buffer)
}
The rest looks identical in version 0 and 1, and still no timestamp in sight...
Gwe
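So the whole version difference in the header decode is gated on versionId. A toy decoder making that explicit (class and field names are mine, not the actual client's):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy decoder for the version-gated part of OffsetCommitRequest.readFrom:
// version 1 adds groupGenerationId (int32) and consumerId (short string).
public class VersionedHeader {
    public int versionId;
    public int groupGenerationId = -1; // defaults used when versionId == 0
    public String consumerId = "";

    static String readShortString(ByteBuffer buf) {
        byte[] bytes = new byte[buf.getShort()]; // int16 length prefix
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static VersionedHeader read(ByteBuffer buf) {
        VersionedHeader h = new VersionedHeader();
        h.versionId = buf.getShort();
        if (h.versionId == 1) { // the two extra fields, and nothing else
            h.groupGenerationId = buf.getInt();
            h.consumerId = readShortString(buf);
        }
        return h;
    }
}
```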
Hi Gwen, I am using/writing kafka-python to construct api requests and have
not dug too deeply into the server source code. But I believe it is
kafka/api/OffsetCommitRequest.scala and specifically the readFrom method
used to decode the wire protocol.
-Dana
OffsetCommitRequest has two constructors now:
For version 0:
OffsetCommitRequest(String groupId, Map offsetData)
And version 1:
OffsetCommitRequest(String groupId, int generationId, String
consumerId, Map offsetData)
None of them seem to require timestamps... so I'm not sure where you
see that
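The overload shape being described is the usual delegation pattern, where the version-0 form fills in defaults for the version-1 fields. A sketch of that pattern only, not the actual client class:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-constructor pattern: the "version 0" constructor
// delegates to the "version 1" one with default generation/consumer values.
public class OffsetCommitSketch {
    public final String groupId;
    public final int generationId;
    public final String consumerId;
    public final Map<Integer, Long> offsetData;

    // "Version 0" shape
    public OffsetCommitSketch(String groupId, Map<Integer, Long> offsetData) {
        this(groupId, -1, "", offsetData); // defaults stand in for the v1 fields
    }

    // "Version 1" shape
    public OffsetCommitSketch(String groupId, int generationId, String consumerId,
                              Map<Integer, Long> offsetData) {
        this.groupId = groupId;
        this.generationId = generationId;
        this.consumerId = consumerId;
        this.offsetData = offsetData;
    }
}
```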
Kafka Team,
We are currently using the 0.8.2 beta version with a patch for KAFKA-1738. Do
you have any updates on when 0.8.2 final version will be released?
Thanks,
Srividhya