> , but all seem unhelpful.
> I've tested combinations of the following:
> * fileStreams created with dumb accept-all filters
> * newFilesOnly true and false,
> * tweaking minRememberDuration to high and low values,
> * on hdfs or l
alue(), is(testFileNumberLimit));
for (Path eachTempFile : tempFiles)
{
Files.deleteIfExists(eachTempFile);
}
Files.deleteIfExists(testDir);
}
From: Tathagata Das [mailto:t...@databricks.com]
Sent: Wednesday, July 15, 2015 00:01
To: Terry Hole
Cc: Hunter Morgan; user@spark.apache.o
>> DStream input = context.fileStream(indir,
>> LongWritable.class, Text.class, TextInputFormat.class, v -> true, false);
>> Also tried with having set:
>>
>> context.sparkContext().getConf().set("spark.streaming.minRememberDuration",
>> "1654564"); to big/small.
>
> Are there known limitations of the onlyNewFiles=false? Am I doing something
> wrong?
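For reference, a minimal self-contained sketch of the setup being described, assuming Spark's Java streaming API. The directory path, batch interval, and duration value are illustrative, not from the thread. One likely-relevant detail: the remember duration should be set on the SparkConf *before* the StreamingContext is created, since the file stream appears to read it at construction time; calling `context.sparkContext().getConf().set(...)` on an already-created context (as in the quoted code) may have no effect.

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class OldFileStreamSketch {
    public static void main(String[] args) throws Exception {
        // Widen the "remember" window so files with older modification
        // times are still eligible when newFilesOnly = false.
        // (Key name varies by version; Spark 1.5+ also reads
        // spark.streaming.fileStream.minRememberDuration.)
        SparkConf conf = new SparkConf()
                .setAppName("old-file-stream")
                .set("spark.streaming.minRememberDuration", "86400s");
        JavaStreamingContext jssc =
                new JavaStreamingContext(conf, Durations.seconds(10));

        JavaPairInputDStream<LongWritable, Text> input = jssc.fileStream(
                "hdfs:///tmp/indir",           // hypothetical input directory
                LongWritable.class, Text.class, TextInputFormat.class,
                path -> true,                  // dumb accept-all filter
                false);                        // newFilesOnly = false

        input.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
```

With `newFilesOnly = false`, files are still only picked up if their modification time falls within the remember window, so a too-small `minRememberDuration` would silently exclude old files, which matches the symptoms described above.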
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/fileStream-with-old-files-tp23802.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.