On Fri, 17 Jun 2016, 14:41 Gerard Klijs, <gerard.kl...@dizzit.com> wrote:

> What do you mean by a *docker volume*?


The container is started through docker-compose, using the volumes keyword
to mount the folder where Kafka saves its log files (relevant snippet
below).
I'm running all of this on Windows, using the standard distribution of
Docker for Windows, so docker-compose runs on Windows while the containers
run inside the docker-machine, a lightweight Linux VM.
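
For reference, the volume mapping in my compose file looks roughly like
this (the image name and paths are illustrative, not my exact file):

    kafka:
      image: wurstmeister/kafka   # illustrative image name
      ports:
        - "9092:9092"
      volumes:
        # Windows host folder -> Kafka log dir inside the container.
        # With docker-machine the host folder is reached through the
        # VirtualBox shared folder, which usually lives under /c/Users.
        - /c/Users/valerio/kafka-logs:/kafka/logs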

I'm not using a data container; I'm trying to save the log files on
Windows, as you suggested, so that they are preserved.

I am running other containers with the same configuration, and I can see
their files correctly stored on the Windows filesystem.

So why can the other containers access their files from Windows while
Kafka can't?

I suspect the memory-mapping Kafka performs on its index files (the
FileChannel.map call in the stack trace, on a channel obtained from a
RandomAccessFile) is not working properly with the remote
(Windows/docker-machine) mount. A standalone test is sketched below.
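
To check that theory independently of Kafka, I would run something like
this small Java program inside the container, pointed at a file on the
mounted volume (the default path below is only a placeholder):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MmapTest {
        public static void main(String[] args) throws Exception {
            // Placeholder path: use a file on the mounted volume.
            String path = args.length > 0 ? args[0] : "/kafka/logs/mmap-test";
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                // Preallocate the file, roughly as OffsetIndex does.
                raf.setLength(10 * 1024 * 1024);
                FileChannel ch = raf.getChannel();
                // This is the call that throws in the stack trace.
                MappedByteBuffer buf =
                    ch.map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
                buf.putInt(0, 42); // touch the mapping
                System.out.println("mmap works on " + path);
            }
        }
    }

If this throws the same "java.io.IOException: Invalid argument", the mount
itself does not support mmap and Kafka is not at fault; VirtualBox shared
folders (vboxsf) in particular are known to be shaky with memory-mapped
files.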

Any other idea?

Thanks for the help, Gerard!

Valerio


> On Fri, Jun 17, 2016 at 1:25 PM OGrandeDiEnne <ograndedie...@gmail.com>
> wrote:
>
> > Hello people,
> >
> > I'm running a single Kafka broker from within a docker container. The
> > folder where Kafka writes its logs is mounted as a *docker volume* on my
> > system.
> >
> > As soon as I try to create a topic I get this error
> >
> > [2016-06-15 10:22:53,602] ERROR [KafkaApi-0] Error when handling request
> > {controller_id=0,controller_epoch=1,partition_states=[{topic=mytopic,partition=0,controller_epoch=1,leader=0,leader_epoch=0,isr=[0],zk_version=0,replicas=[0]}],live_leaders=[{id=0,host=kafkadocker,port=9092}]}
> > (kafka.server.KafkaApis)
> > *java.io.IOException: Invalid argument*
> > at sun.nio.ch.FileChannelImpl.map0(Native Method)
> > at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:926)
> > *at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:75)*
> > at kafka.log.LogSegment.<init>(LogSegment.scala:58)
> > at kafka.log.Log.loadSegments(Log.scala:233)
> > at kafka.log.Log.<init>(Log.scala:101)
> > at kafka.log.LogManager.createLog(LogManager.scala:363)
> > at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> > at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> > at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> > at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
> > at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> > at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> > at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
> > at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:239)
> > at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> > at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:699)
> > at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:698)
> > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> > at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> > at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> > at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> > at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:698)
> > at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:644)
> > at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:144)
> > at kafka.server.KafkaApis.handle(KafkaApis.scala:80)
> > at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > The error is an IOException, so it looks like the broker has trouble
> > accessing the log files.
> > It seems Kafka relies on a feature of the underlying filesystem that is
> > not present here.
> >
> > I do not get any error if I keep the Kafka log files inside the docker
> > container.
> >
> > Have you seen this issue before?
> >
> > Thanks.
> >
> > *Valerio*
> >
>
