Hi Kamil,
AFAIK, the Python DataStream API still does not support the Avro format in
StreamingFileSink. However, I guess you could convert the DataStream to a
Table [1] and then use all the connectors supported in the Table & SQL API.
In this case, you could use the FileSystem connector.
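To make that concrete, here is a rough, untested sketch of that route
(assuming Flink 1.13+ with the flink-sql-avro jar on the classpath; the
source data, table name, field names, and output path are placeholders):

from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)

# Placeholder source; in practice this is your existing DataStream.
ds = env.from_collection(
    [(1, 'a'), (2, 'b')],
    type_info=Types.ROW([Types.INT(), Types.STRING()]))

# Convert the DataStream to a Table so the SQL connectors become available.
table = t_env.from_data_stream(ds).alias("id", "name")

# FileSystem sink writing Avro; 'hdfs:///tmp/avro-out' is a placeholder path.
t_env.execute_sql("""
    CREATE TABLE avro_sink (
        id INT,
        name STRING
    ) WITH (
        'connector' = 'filesystem',
        'path' = 'hdfs:///tmp/avro-out',
        'format' = 'avro'
    )
""")

table.execute_insert("avro_sink").wait()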
> 2021-08-16 15:58:13.986 [Cancellation Watchdog for Source: MASKED] ERROR
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner - Fatal error
> occurred while executing the TaskManager. Shutting it down...
> org.apache.flink.util.FlinkRuntimeException: Task did not exit gracefully with
Hi Rion,
Your solution is good.
It seems that you need to enrich a stream with data queried via external HTTP
requests. There is another solution for reference, similar to the mechanism
of lookup join in Flink SQL.
Lookup join in Flink SQL supports two modes: async mode and sync mode.
For each input d
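As a sketch of what the lookup-join route can look like (untested, driven
from PyFlink for illustration; the table names, JDBC URL, and cache options
are placeholders, and flink-connector-jdbc plus a driver jar are assumed to
be on the classpath):

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

# Main stream; the lookup join needs a processing-time attribute.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id INT,
        customer_id INT,
        proc_time AS PROCTIME()
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '1'
    )
""")

# Dimension table queried per input row (JDBC lookups run in sync mode;
# the optional cache avoids hitting the database for every record).
t_env.execute_sql("""
    CREATE TABLE customers (
        id INT,
        name STRING
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://host:3306/db',
        'table-name' = 'customers',
        'lookup.cache.max-rows' = '1000',
        'lookup.cache.ttl' = '10min'
    )
""")

# For each order row, look up the customer as of processing time.
result = t_env.sql_query("""
    SELECT o.order_id, c.name
    FROM orders AS o
    JOIN customers FOR SYSTEM_TIME AS OF o.proc_time AS c
    ON o.customer_id = c.id
""")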
Hey All,
I am running Flink in Docker containers (image tag: flink:scala_2.11-java11)
on EC2.
I am able to connect to a Kinesis Connector, but nothing is being consumed.
My command to start the JobManager and TaskManager:
docker run --rm --volume /root/:/root/ --env
JOB_MANAGER_RPC_ADDR
Thanks Chesnay! That helped me resolve the issue.
On Fri, 6 Aug 2021 at 04:31, Chesnay Schepler wrote:
> The reason this doesn't work is that your application works directly
> against Hadoop.
> The filesystems in the plugins directory are only loaded via specific
> code-paths, specifically when
Great, thanks for the pointers everyone.
I'm going to pursue rolling my own built around Lettuce, since it seems more
feature-rich w.r.t. async semantics.
On Mon, Aug 16, 2021 at 7:21 PM Yik San Chan wrote:
> By the way, this post in Chinese showed how we do it exactly with code.
>
> https://yiksan
Really appreciate it, Austin!
Hongbo
On Aug 17, 2021, 10:33 -0700, Austin Cawley-Edwards wrote:
> Hi Hongbo,
>
> Thanks for your interest in the Redis connector! I'm not entirely sure what
> the release process is like for Bahir, but I've pulled in @Robert Metzger who
> has been involved in the
Hi all,
I'm sometimes observing an issue, which has been hard to reproduce, where
task managers are not able to register with the Flink cluster. We provision
only the number of task managers required to run a given application, so the
absence of any of the task managers causes the job to enter
Hi Hongbo,
Thanks for your interest in the Redis connector! I'm not entirely sure what
the release process is like for Bahir, but I've pulled in @Robert Metzger
who has been involved in the project in the past and
can give an update there.
Best,
Austin
On Tue, Aug 17, 2021 at 10:41 AM Hongbo Mi
Before these messages, there is the following message in the log:
2021-08-12 23:02:58.015 [Canceler/Interrupts for Source: MASKED])
(1/1)#29103' did not react to cancelling signal for 30 seconds, but is
stuck in method:
java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
java.base@11.0.
Thanks Yangze, indeed, I see the following in the log about 10s before the
final crash (masked some sensitive data using `MASKED`):
2021-08-16 15:58:13.985 [Canceler/Interrupts for Source: MASKED] WARN
org.apache.flink.runtime.taskmanager.Task - Task 'MASKED' did not react to
cancelling signal for
Hi Flink friends,
I recently have a question about how to set TTL to make Redis keys expire in
flink-connector-redis.
I originally posted at Stack Overflow at
https://stackoverflow.com/questions/68795044/how-to-set-ttl-to-make-redis-keys-expire-in-flink-connector-redis
Then I found there is a p
Hi Caizhi,
I don’t mind the request being synchronous (or not using the Async I/O
connectors). Assuming I go down that route, would this be the appropriate way
to handle this? Specifically, on a keyed stream, creating an HttpClient and
storing the result in state if the state was empty?
It ma
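For what it's worth, a rough, untested sketch of that pattern in PyFlink
terms (the URL, record layout, and state name are made up for illustration;
note that the blocking call stalls the task while the request is in flight):

import urllib.request

from pyflink.common.typeinfo import Types
from pyflink.datastream import (StreamExecutionEnvironment,
                                KeyedProcessFunction, RuntimeContext)
from pyflink.datastream.state import ValueStateDescriptor


class EnrichFunction(KeyedProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        self.cache = runtime_context.get_state(
            ValueStateDescriptor("enrichment", Types.STRING()))

    def process_element(self, value, ctx):
        enrichment = self.cache.value()
        if enrichment is None:
            # State is empty for this key: do the blocking HTTP lookup once.
            with urllib.request.urlopen(
                    f"https://example.com/lookup/{ctx.get_current_key()}") as resp:
                enrichment = resp.read().decode("utf-8")
            self.cache.update(enrichment)
        yield value, enrichment


env = StreamExecutionEnvironment.get_execution_environment()
env.from_collection(
        [("k1", 1), ("k2", 2)],
        type_info=Types.TUPLE([Types.STRING(), Types.INT()])) \
   .key_by(lambda v: v[0]) \
   .process(EnrichFunction()) \
   .print()
env.execute("enrichment-sketch")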
Hi Andreas,
the problem here is that the command you're using is starting a per-job
cluster (which is obvious from the used deployment method
"YarnClusterDescriptor.deployJobCluster"). Apparently the `-m yarn-cluster`
flag is deprecated and no longer supported; I think this is something we
should
Hello,
I'm trying to save my data stream to an Avro file on HDFS. In the Flink
documentation I can only see explanations for Java/Scala. However, I can't
seem to find a way to do it in PyFlink. Is this possible to do in PyFlink
currently?
Kind Regards
Kamil
Hi David,
Thanks for your answer. I finally managed to read ORC files by:
- switching to s3a:// in my Flink SQL table path parameter
- providing all the properties in Hadoop's core-site.xml file (fs.s3a.endpoint,
fs.s3a.path.style.access, fs.s3a.aws.credentials.provider, fs.s3a.access.key,
fs.s3
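For reference, the table-definition side of this looks roughly like the
following (a sketch only; the schema, bucket, and path are placeholders, the
fs.s3a.* properties are assumed to already be in core-site.xml, and the ORC
format jar is assumed to be available):

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_batch_mode().build())

# Filesystem source reading ORC files directly via the s3a:// scheme.
t_env.execute_sql("""
    CREATE TABLE orc_source (
        id INT,
        name STRING
    ) WITH (
        'connector' = 'filesystem',
        'path' = 's3a://my-bucket/orc-data/',
        'format' = 'orc'
    )
""")

t_env.sql_query("SELECT * FROM orc_source").execute().print()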
I removed that line from the code and it seems to have solved the problem.
Thank you very much! :)
All the best,
Laszlo
On Tue, Aug 17, 2021 at 9:54 AM László Ciople wrote:
> Ok, thank you for the tips. I will modify it and get back to you :)
>
> On Tue, Aug 17, 2021 at 9:42 AM David Morávek wr