Hmm, not sure what the issue is. Any Windows users want to chime in?

Thanks,

Jun


On Tue, Jul 9, 2013 at 9:00 AM, Denny Lee <denny.g....@gmail.com> wrote:

> Hey Jun,
>
> We've been running into this issue when running perf.Performance as per
> http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/.
> When running it with 100K messages, it works fine on Windows at about
> 20-30K msg/s.  But when running it with 1M messages, the broker fails
> with the message below.  Neither modifying the JVM memory configuration
> nor running on SSDs appears to have any effect.  As for JVMs - no
> plug-ins, and we've tried both 1.6 and OpenJDK 1.7.
>
> This looks like a JVM memory-map issue on Windows - perhaps running
> System.gc() would prevent the roll failure?
>
> Any thoughts?
>
> Thanks!
> Denny
>
>
>
>
> On 7/9/13 7:55 AM, "Jun Rao" <jun...@gmail.com> wrote:
>
> >A couple of users seem to be able to get 0.8 working on Windows. Anything
> >special about your Windows environment? Are you using any JVM plugins?
> >
> >Thanks,
> >
> >Jun
> >
> >
> >On Tue, Jul 9, 2013 at 12:59 AM, Timothy Chen <tnac...@gmail.com> wrote:
> >
> >> Hi all,
> >>
> >> I've tried pushing a large amount of messages into Kafka on Windows, and
> >> got the following error:
> >>
> >> Caused by: java.io.IOException: The requested operation cannot be
> >> performed on a file with a user-mapped section open
> >>         at java.io.RandomAccessFile.setLength(Native Method)
> >>         at kafka.log.OffsetIndex.liftedTree2$1(OffsetIndex.scala:263)
> >>         at kafka.log.OffsetIndex.resize(OffsetIndex.scala:262)
> >>         at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:247)
> >>         at kafka.log.Log.rollToOffset(Log.scala:518)
> >>         at kafka.log.Log.roll(Log.scala:502)
> >>         at kafka.log.Log.maybeRoll(Log.scala:484)
> >>         at kafka.log.Log.append(Log.scala:297)
> >>         ... 19 more
> >>
> >> I suspect that Windows is not releasing memory-mapped file references
> >> soon enough.
> >>
> >> I wonder if there are any good workarounds or solutions for this?
> >>
> >> Thanks!
> >>
> >> Tim
> >>
>
>
>
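The failure Tim reports can be illustrated outside Kafka. The sketch below (the class name MmapResizeDemo is hypothetical, not from Kafka) maps a temporary file the way OffsetIndex maps its index file, drops the reference to the mapping, hints GC as Denny suggests, and then attempts the setLength() resize that Log.roll triggers. On Linux and macOS the resize succeeds regardless of the mapping; on Windows the setLength() call can throw the "user-mapped section open" IOException shown in the stack trace above, because the JVM only unmaps a MappedByteBuffer when it is garbage collected, and System.gc() is only a hint:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapResizeDemo {

    // Returns the file length after the resize attempt, or -1 if the
    // platform refused it (the Windows "user-mapped section open" error).
    static long resizeWithMappedSection() throws IOException {
        File f = File.createTempFile("offset-index", ".tmp");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        try {
            raf.setLength(8 * 1024);
            // Map the file read-write, as OffsetIndex does for its index.
            MappedByteBuffer buf = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
            buf.putLong(0, 42L);   // write through the mapping

            buf = null;        // drop the only reference to the mapping
            System.gc();       // Denny's hint; unmap timing is still up to the JVM

            try {
                // The resize that OffsetIndex.resize performs on roll.
                raf.setLength(4 * 1024);
            } catch (IOException e) {
                // On Windows this can fail while the mapping is still live.
                return -1;
            }
            return raf.length();
        } finally {
            raf.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("length after resize: " + resizeWithMappedSection());
    }
}
```

On a POSIX system this prints a successful resize; whether the GC hint helps on Windows is timing-dependent, which is why it is a mitigation rather than a fix.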
