Thanks for the information.
So there is no solution for this as of now?
On Mon 10 Dec, 2018, 1:16 PM Shuyi Chen wrote:
> We've seen a similar issue in our production; you can refer to this JIRA (
> https://issues.apache.org/jira/browse/FLINK-10848) for more detail.
>
> Shuyi
>
> On Sun, Dec 9, 2018 at 11
Hi Andrey,
I checked our code again. We are indeed using timers for dynamic routing
updates. The timer fires every five minutes!
This must be the reason for the five-minute pattern in the remaining .sst
files.
Do I understand it correctly that the files remain because they are too small
f
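(For reference, a hedged Java sketch of the kind of recurring five-minute processing-time timer described above; the class name and the routing-refresh detail are assumptions, not the poster's actual code.)

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical sketch: re-register a processing-time timer every five
// minutes, e.g. to refresh dynamic routing rules. Each firing touches
// keyed state, which with incremental RocksDB checkpoints can leave
// small .sst files on a five-minute cadence.
public class RoutingRefreshFunction extends KeyedProcessFunction<String, String, String> {

    private static final long FIVE_MINUTES = 5 * 60 * 1000L;

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        // Schedule the first firing; onTimer() chains the following ones.
        ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + FIVE_MINUTES);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // Refresh the routing table here, then schedule the next firing.
        ctx.timerService().registerProcessingTimeTimer(timestamp + FIVE_MINUTES);
    }
}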
Hi,
Good that you found the cause of the problem in your configuration setting, but
unfortunately I cannot quite follow your reasoning yet. Can you explain why
the code would fail for a “slow” HDFS? If no local recovery is happening (this
means: job failover, with local recovery activated) t
Got it, my bad. I should have used a BucketAssigner. This seems to be working fine:
StreamingFileSink.forBulkFormat[Request](
    new Path(outputPath),
    ParquetAvroWriters.forReflectRecord(classOf[Request]))
  .withBucketAssigner(new DateTimeBucketAssigner[Request])
  .withBucketCheckInterval(5000L)
  .build()
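(As a cross-check, a hedged Java sketch of the same builder wired into a job; Request, outputPath, and the RequestSource are assumptions carried over from the snippet above. Note that StreamingFileSink finalizes part files on checkpoints, so checkpointing must be enabled for output to appear.)

import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000L); // part files are only finalized on checkpoints

DataStream<Request> requests = env.addSource(new RequestSource()); // hypothetical source

StreamingFileSink<Request> sink = StreamingFileSink
        .forBulkFormat(new Path(outputPath), ParquetAvroWriters.forReflectRecord(Request.class))
        .withBucketAssigner(new DateTimeBucketAssigner<>())
        .withBucketCheckInterval(5000L)
        .build();

requests.addSink(sink);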
Hi,
Have you checked the TaskManager logs?
Piotrek
> On 8 Dec 2018, at 12:23, Alieh wrote:
>
> Hello Piotrek,
>
> thank you for your answer. I installed Flink on a local cluster and used
> the GUI to monitor the task managers. It seems the program does not
> start at all. The whole
Can anyone help?
Can anyone please help?
Hi,
I have been facing issues while trying to read from an HDFS SequenceFile.
This is my code snippet:
DataSource<Tuple2<Text, Text>> input = env
    .createInput(HadoopInputs.readSequenceFile(Text.class, Text.class,
            ravenDataDir),
        TypeInformation.of(new TypeHint<Tuple2<Text, Text>>() {}));
Upon executing this in
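(For readers trying to reproduce this, a hedged, self-contained Java sketch of the same read; it assumes the flink-hadoop-compatibility dependency is on the classpath, and the HDFS path is a placeholder.)

import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DataSource;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.Text;

public class ReadSequenceFileJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        String ravenDataDir = "hdfs:///path/to/input"; // placeholder path

        // Key and value classes must match what the SequenceFile was written with.
        DataSource<Tuple2<Text, Text>> input = env.createInput(
                HadoopInputs.readSequenceFile(Text.class, Text.class, ravenDataDir),
                TypeInformation.of(new TypeHint<Tuple2<Text, Text>>() {}));

        input.first(10).print();
    }
}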
Dear community,
this is the weekly community update thread #50. Please post any news and
updates you want to share with the community to this thread.
# Unified core API for streaming and batch
The community started to discuss how to bring streaming and batch closer
together by implementing a com
Hi,
I think with how the assignment of tasks to slots currently works, there is no
way of ensuring that the source tasks are evenly spread across the TaskManagers
(TaskExecutors). The rescale() API is from a time when scheduling worked a bit
differently in Flink, I'm afraid.
I'm cc'ing Till, who mig
Hi Abhi Thakur,
We need more information to help you. Which Docker images are you using? Can
you share the Kubernetes resource definitions? Can you share the complete
logs
of the JM and TMs? Did you follow the steps outlined in the Flink
documentation [1]?
Best,
Gary
[1]
https://ci.apache.org/pro
Hi Mingliang,
Aljoscha is right. At the moment Flink does not support spreading tasks out
across all TaskManagers; this is a feature we still need to add.
Until then, you need to set the parallelism to the number of available
slots in order to guarantee that all TaskManagers are equally used.
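(A hedged Java sketch of that workaround; the cluster numbers are hypothetical and would have to match your actual slot count.)

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Inside the job's main(): set the job parallelism to the total slot count
// so every TaskManager receives a share of the tasks.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
int taskManagers = 4;        // hypothetical number of TaskManagers
int slotsPerTaskManager = 2; // should match taskmanager.numberOfTaskSlots
env.setParallelism(taskManagers * slotsPerTaskManager);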
Hello,
this is the task manager log, but it does not change after I run the
program. I think the Flink planner has a problem with my program. It
cannot even start the job.
Best,
Alieh
2018-12-10 12:20:20,386 INFO
org.apache.flink.runtime.taskexecu
Could anyone please help me with this?
Thanks,
Akshay
On Mon, 10 Dec 2018, 6:05 pm Akshay Mendole wrote:
> Hi,
> I have been facing issues while trying to read from an HDFS
> SequenceFile.
>
> This is my code snippet
>
> DataSource<Tuple2<Text, Text>> input = env
> .createInput(HadoopInputs.readSequenceFile(Text.clas
Hello,
We've been seeing an issue with several Flink 1.5.4 clusters that looks
like this:
1. Job is cancelled with a savepoint
2. The jar is deleted from our HA blobstore (S3)
3. The jobgraph in ZK is *not* deleted
4. We restart the cluster
5. Startup fails in recovery because the jar is not avai
Hi all,
I seem to have found a problem with the until condition in
testGreedyUntilZeroOrMoreWithDummyEventsAfterQuantifier in GreedyITCase.java. I
modified the unit test a little bit, like this:
@Test
public void testGreedyUntilZeroOrMoreWithDummyEventsAfterQuantifier() throws
Exception {
List<StreamRecord<Event>>
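(For readers following along, a hedged Java sketch of how until() composes with oneOrMore().greedy() in the CEP Pattern API; the Event type and its getName() accessor are assumptions.)

import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

// Greedy loop on "a" events that stops as soon as the until() condition
// matches; the stopping event itself is not part of the accepted sequence.
Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
        .where(new SimpleCondition<Event>() {
            @Override
            public boolean filter(Event e) {
                return e.getName().equals("a");
            }
        })
        .oneOrMore()
        .greedy()
        .until(new SimpleCondition<Event>() {
            @Override
            public boolean filter(Event e) {
                return e.getName().equals("stop");
            }
        });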
Sorry, it seems that I misunderstood how the until condition composes with
oneOrMore.
Original Message
Sender: bupt_ljy <bupt_...@163.com>
Recipient: user <u...@flink.apache.org>
Date: Tuesday, Dec 11, 2018 14:00
Subject:Something wrong with the until condition FLINK-CEP
Hi all,
I