Also, what is the configuration for the servers? In particular, it would be
good to know the retention and/or log compaction settings, as those delete
files.
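
For reference, here is a quick way to check both things at once — a sketch only: the config path is an assumption based on the install layout in the lsof output quoted below, and 8446 is the broker PID from that same output, so substitute your own values.

```shell
# Print the broker settings that control log-file deletion.
# NOTE: this path is a guess from the data directory shown later in the
# thread (/home/work/data/soft/kafka-0.8/...); adjust it for your install.
CONFIG=/home/work/data/soft/kafka-0.8/config/server.properties
grep -E '^(log\.retention\.(hours|minutes|bytes)|log\.cleanup\.policy|log\.segment\.bytes)' "$CONFIG"

# List any files the broker process has deleted but still holds open
# (8446 is the PID from the lsof output below; substitute your broker's PID).
lsof -p 8446 | grep '(deleted)'
```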

-Jay

On Sun, Jan 25, 2015 at 4:34 AM, Jaikiran Pai <jai.forums2...@gmail.com>
wrote:

> Hi Yonghui,
>
> Do you still have this happening? If yes, can you tell us a bit more about
> your setup? Is there something else that accesses or maybe deletes these
> log files? For more context on this question, please read the related
> discussion here: http://mail-archives.apache.org/mod_mbox/kafka-dev/201501.mbox/%3C54C47E9B.5060401%40gmail.com%3E
>
>
> -Jaikiran
>
>
>> On Thursday 08 January 2015 11:19 AM, Yonghui Zhao wrote:
>>
>>> CentOS release 6.3 (Final)
>>>
>>>
>>> 2015-01-07 22:18 GMT+08:00 Harsha <ka...@harsha.io>:
>>>
>>>> Yonghui,
>>>>             Which OS are you running?
>>>> -Harsha
>>>>
>>>> On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
>>>>
>>>>> Yes, and I found the reason: the rename during deletion failed.
>>>>> It seems the files were deleted during the rename, and the resulting
>>>>> exception then prevented Kafka from closing the file handles.
>>>>> But I don't know how the rename failure could happen.
>>>>>
>>>>> [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task
>>>>> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
>>>>> kafka.common.KafkaStorageException: Failed to change the log file suffix
>>>>> from  to .deleted for log segment 70781650
>>>>>         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>>>>>         at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
>>>>>         at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
>>>>>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>>         at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
>>>>>         at scala.collection.immutable.List.foreach(List.scala:318)
>>>>>         at kafka.log.Log.deleteOldSegments(Log.scala:415)
>>>>>         at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
>>>>>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
>>>>>         at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
>>>>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>>>>>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>>>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>>>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>>>>>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>>>>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>>>>>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
>>>>>         at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
>>>>>         at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>>>>>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>>>>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>         at java.lang.Thread.run(Thread.java:662)
>>>>>
>>>>>
>>>>> 2015-01-07 13:56 GMT+08:00 Jun Rao <j...@confluent.io>:
>>>>>
>>>>>> Do you mean that the Kafka broker still holds a file handle on a
>>>>>> deleted file? Do you see those files being deleted in the Kafka log4j
>>>>>> log?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Jun
>>>>>>
>>>>>> On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zhaoyong...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>  Hi,
>>>>>>>
>>>>>>> We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space
>>>>>>> alert. We found that many Kafka data files have been deleted but are
>>>>>>> still held open by Kafka, such as:
>>>>>>>
>>>>>>> _yellowpageV2-0/00000000000068170670.log (deleted)
>>>>>>> java  8446  root  724u  REG  253,2  536937911  26087362
>>>>>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
>>>>>>> java  8446  root  725u  REG  253,2  536910838  26087364
>>>>>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
>>>>>>> java  8446  root  726u  REG  253,2  536917902  26087368
>>>>>>> /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)
>>>>>>>
>>>>>>> Is there anything wrong, or is something misconfigured?
>>>>>>>
>>>>>>>
>>
>
