Hello,

+1, same problem when I tried it. However, I haven't dug into the code
examples yet, so I can't give you a solution.
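That said, the NoSuchMethodError on JsonGenerator.setCurrentValue usually
points at a Jackson version mismatch: as far as I know setCurrentValue only
exists since jackson-core 2.5, so an older jackson-core jar may be shadowing
the one jackson-databind expects on the kafka-run-class classpath. Below is a
small sketch (not a fix) for checking which jackson jars actually get loaded;
the class name is just a placeholder, and it assumes the standard jackson-core
and jackson-databind packages are present:

import com.fasterxml.jackson.core.JsonGenerator;

// Prints where jackson-core was loaded from and which jackson versions are on
// the classpath, to spot a mismatch between jackson-core and jackson-databind.
public class JacksonVersionCheck {
    public static void main(String[] args) {
        // Jar the JVM actually loaded JsonGenerator from (may be null only for
        // boot-classpath classes, which should not be the case here).
        System.out.println("jackson-core loaded from: "
                + JsonGenerator.class.getProtectionDomain().getCodeSource().getLocation());
        // Declared versions of the two jackson artifacts on the classpath.
        System.out.println("jackson-core version:     "
                + com.fasterxml.jackson.core.json.PackageVersion.VERSION);
        System.out.println("jackson-databind version: "
                + com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);
    }
}

If you compile that onto the same classpath and run it through
./bin/kafka-run-class, it should show whether an older jackson-core is being
picked up ahead of the 2.5.x one the Streams code seems to expect.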

2016-04-19 17:20 GMT+02:00 Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com>:

> Hello,
>
> I went through the QuickStart instructions at:
> http://docs.confluent.io/2.1.0-alpha1/streams/quickstart.html
>
> Downloaded confluent-2.1.0-alpha1 and started the ZK & Kafka servers.
> I am continuously producing to the topic streams-file-input.
> However, running the WordCountJob example throws an error (the message is
> pasted below).
>
> java version "1.7.0_79"
> OpenJDK Runtime Environment (IcedTea 2.5.5) (7u79-2.5.5-0ubuntu0.14.04.2)
> OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
>
> Please help resolve the issue.
>
> thanks,
> Buvana
>
> ~/confluent-2.1.0-alpha1$ ./bin/kafka-run-class org.apache.kafka.streams.examples.wordcount.WordCountJob
> [2016-04-19 11:09:14,223] WARN The configuration zookeeper.connect = localhost:2181 was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
> [2016-04-19 11:09:14,344] WARN The configuration num.standby.replicas = 0 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-04-19 11:09:14,344] WARN The configuration zookeeper.connect = localhost:2181 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-04-19 11:09:14,344] WARN The configuration __stream.thread.instance__ = Thread[StreamThread-1,5,main] was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-04-19 11:09:14,350] WARN The configuration zookeeper.connect = localhost:2181 was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
> Exception in thread "StreamThread-1" java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonGenerator.setCurrentValue(Ljava/lang/Object;)V
>     at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:445)
>     at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:29)
>     at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:129)
>     at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3387)
>     at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2781)
>     at org.apache.kafka.streams.processor.internals.InternalTopicManager.createTopic(InternalTopicManager.java:178)
>     at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:89)
>     at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:362)
>     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:227)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:393)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$700(AbstractCoordinator.java:81)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:343)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:324)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>     at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:325)
>     at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:248)
>
>