It seems you may need to implement a custom connector for ScyllaDB, as I
haven't found an existing one.
I hope the docs [1][2] can help you implement your own connector.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sourcessinks/
[2] https://flink.apache.org/2021/09/07/con
After a few hours of running the job manager and task manager generated using
the operator, I get the following message in the operator log. There really
wasn't any traffic, and the Flink deployment is being deleted:
=== Finished metrics report ===
Deleting Fli
Team,
I'm looking for a solution to consume/read data from ScyllaDB into Apache
Flink.
If anyone can guide me or share pointers, it would be helpful.
Regards,
Himanshu
Hi,
From the task manager's log we can find the following exception stack
trace; it seems to be an operating-system-related problem with RocksDB.
2022-06-04 14:45:53,878 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - KEYED
> PROCESS, Map -> Sink: Print to Std. Out (1/1
Hi, what about using "Top1 + Agg" or a "UDAF" for your scenario?
The main idea is that when an event changes from status A to C, Flink
needs to send a `DELETE` record downstream to remove the old data and then
send a new one downstream. `TOP1` will keep the newest record with the same `Id`.
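For example, a minimal PyFlink sketch of the "Top1 + Agg" idea (the `events`
table with `id`, `status` and `event_time` columns is an assumption):

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Top-1 deduplication: keep only the latest status per id. Flink
    # recognizes this ROW_NUMBER pattern and emits a DELETE for the old
    # row when the latest row for an id changes.
    t_env.execute_sql("""
        CREATE TEMPORARY VIEW latest_status AS
        SELECT id, status
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (
                       PARTITION BY id ORDER BY event_time DESC) AS rn
            FROM events
        )
        WHERE rn = 1
    """)

    # Aggregating on top of the deduplicated view: when an event moves
    # from A to C, the count for A is retracted and the count for C grows.
    per_status_counts = t_env.sql_query(
        "SELECT status, COUNT(*) AS cnt FROM latest_status GROUP BY status")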
Hi Mark,
Could you share an example that reproduces this issue?
Regards,
Dian
On Thu, Jun 9, 2022 at 9:22 PM Márk Bartos wrote:
> Hi,
>
> I'd like to ask for help regarding the java exception:
> Caused by: java.lang.ClassCastException: class java.sql.Timestamp cannot
> be cast to class j
I second Martijn, UNNEST should be supported.
Besides, regarding the above exception, could you share an example that
reproduces this issue?
Regards,
Dian
On Mon, Jun 13, 2022 at 8:21 PM Martijn Visser wrote:
> Hi John,
>
> You're mentioning that Flink doesn't support UNNEST, but it does
Hi Ivan,
Is your question how to parse the JSON string in PyFlink? If so, maybe you
could take a look at this [1].
Regards,
Dian
[1]
https://stackoverflow.com/questions/71820015/how-to-reference-nested-json-within-pyflink-sql-when-json-schema-varies
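In case it's useful, a minimal sketch of that approach (the column name and
JSON path are made up): keep the varying payload as a STRING column and parse
it with a Python UDF:

    import json

    from pyflink.table import DataTypes
    from pyflink.table.udf import udf

    @udf(result_type=DataTypes.STRING())
    def get_json_field(payload, path):
        # Walk a dot-separated path through the parsed JSON document;
        # return None when any key along the path is missing.
        value = json.loads(payload)
        for key in path.split('.'):
            if not isinstance(value, dict) or key not in value:
                return None
            value = value[key]
        return str(value)

    # t_env.create_temporary_function("get_json_field", get_json_field)
    # t_env.sql_query("SELECT get_json_field(payload, 'a.b.c') FROM src")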
On Fri, Jun 10, 2022 at 7:40 PM ivan.ros...@a
Hi Christian,
> thanks for the reply. We use AT_LEAST_ONCE delivery semantics in this
> application. Do you think this might still be related?
No, in that case, Kafka transactions are not used, so it should not be
relevant.
Best,
Alexander Fedulov
On Mon, Jun 13, 2022 at 3:48 PM Christian Lorenz
Hello,
I have a stream of events coming over Kafka. Each event progresses
through a series of statuses. I want to display an aggregated output of how
many events are in a particular status. If, say, an event has progressed
from status A to status C, then that event needs to be only counted
Hi,
I have one Flink job which has two tasks:
Task 1: sources some static data over HTTPS and keeps it in memory, refreshing
it every hour.
Task 2: processes real-time events from Kafka, uses the static data to
validate and transform them, then forwards them to another Kafka topic.
so far, e
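(A sketch of one way to express the hourly refresh in PyFlink, assuming the
static data can be cached inside the processing function; all names here are
hypothetical:)

    import time

    from pyflink.datastream.functions import MapFunction

    class EnrichWithStaticData(MapFunction):
        REFRESH_INTERVAL_SECONDS = 3600

        def open(self, runtime_context):
            # Initial load of the static data over HTTPS.
            self._static_data = self._fetch_static_data()
            self._loaded_at = time.time()

        def map(self, event):
            # Refresh the cached data lazily, at most once per hour.
            if time.time() - self._loaded_at > self.REFRESH_INTERVAL_SECONDS:
                self._static_data = self._fetch_static_data()
                self._loaded_at = time.time()
            return self._validate_and_transform(event, self._static_data)

        def _fetch_static_data(self):
            raise NotImplementedError  # e.g. an HTTPS call parsed into a dict

        def _validate_and_transform(self, event, static_data):
            raise NotImplementedError  # validation / transformation logic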
Thanks, I'll check it out.
On Mon, Jun 13, 2022 at 2:40 AM Qingsheng Ren wrote:
> Hi Sucheth,
>
> If you are referring to Table / SQL API, I'm afraid it doesn't support
> schema evolution or different types from one Kafka table. An
> alternative way is to consume the topic with raw format [1] an
Hi Alexander,
thanks for the reply. We use AT_LEAST_ONCE delivery semantics in this
application. Do you think this might still be related?
Best regards,
Christian
From: Alexander Fedulov
Date: Monday, June 13, 2022 at 13:06
To: "user@flink.apache.org"
Cc: Christian Lorenz
Subject: Re: Kafka
Hi Mike,
It would be worthwhile to check if this still occurs in Flink 1.14, since
Flink bumped to a newer version of RocksDB in that version. Is that a
possibility for you?
Best regards,
Martijn
On Mon, Jun 13, 2022 at 15:21, Mike Barborak wrote:
> When trying to savepoint our job, we are get
Hi Martijn,
thanks for replying. I would also expect the behavior you describe below.
AFAICT it was also like this with Flink 1.14. I am aware that Flink uses
checkpointing for fault tolerance, but, for example, the Kafka offsets are
part of our monitoring, and this will lead to alerts. Other
When trying to savepoint our job, we are getting the stack trace below. Is
there a way to know more about this failure? Like which function in the job
graph is associated with the problematic state and which key (assuming it is
keyed state)?
Or is there a fix for this exception? The only mentio
Hi, you can send any content to user-unsubscr...@flink.apache.org to
unsubscribe.
On 2022-06-12 11:41:27, "chenshu...@foxmail.com" wrote:
unsubscribe
Unsubscribe
chenshu...@foxmail.com
You're a legend, thank you so much! I was looking at the internal functions
docs page, not that one.
John
Sent from my iPhone
On 13 Jun 2022, at 13:21, Martijn Visser wrote:
Hi John,
You're mentioning that Flink doesn't support UNNEST, but it does [1]. Would
this work for you?
Best regar
Hi sigalit,
It's a known bug that has been fixed in Flink 1.15.0. See [1] and [2] for
details.
[1] https://issues.apache.org/jira/browse/FLINK-27712
[2] https://issues.apache.org/jira/browse/FLINK-25454
Best,
Lijie
On Mon, Jun 13, 2022 at 20:17, Sigalit Eliazov wrote:
> Hi all
>
>
> We are using the flink k8
Hi John,
You're mentioning that Flink doesn't support UNNEST, but it does [1]. Would
this work for you?
Best regards,
Martijn
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/table/sql/queries/joins/#array-expansion
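For reference, a minimal PyFlink sketch of that array-expansion pattern (the
table and column names are invented):

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Assuming a table `orders` with a column `tags ARRAY<STRING>`:
    expanded = t_env.sql_query("""
        SELECT o.order_id, t.tag
        FROM orders AS o
        CROSS JOIN UNNEST(o.tags) AS t (tag)
    """)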
On Mon, Jun 13, 2022 at 13:55, John Tipper wrote:
> H
Hi all
We are using the latest version of the Flink Kubernetes operator with Flink
1.14 to deploy our pipelines in application mode (one job per cluster, with
one job manager and one task manager).
Every few minutes I receive the following error in the job manager, and all
the tasks are restarted.
Hi all,
Flink doesn't support the unnest() function, which takes an array and creates a
row for each element in the array. I have a column containing an array of
maps and I'd like to implement my own unnest.
I tried this as an initial do-nothing implementation:
@udtf(result_types=DataTypes.MAP(
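(For illustration, a complete minimal version of such a UDTF, assuming the
column type is ARRAY<MAP<STRING, STRING>>:)

    from pyflink.table import DataTypes
    from pyflink.table.udf import udtf

    @udtf(result_types=DataTypes.MAP(DataTypes.STRING(), DataTypes.STRING()))
    def my_unnest(arr):
        # Emit one row per element of the array.
        for element in arr:
            yield element

    # t_env.create_temporary_function("my_unnest", my_unnest)
    # SELECT m FROM src, LATERAL TABLE(my_unnest(arr_col)) AS T(m)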
Can we find a more robust way to support this?
flink-shaded, any relocation pattern, and JsonRowDataSerializationSchema are
all Flink internals that users shouldn't use or rely on.
On 13/06/2022 12:26, Qingsheng Ren wrote:
Hi Andrew,
This is indeed a tricky case since Flink doesn't provide non
Hi Christian,
you should check if the exceptions that you see after the broker is back
from maintenance are the same as the ones you posted here. If you are using
EXACTLY_ONCE, it could be that the later errors are caused by Kafka purging
transactions that Flink attempts to commit [1].
Best,
Alex
Hi Andrew,
This is indeed a tricky case, since Flink doesn't provide a non-shaded
JAR for flink-json. One hacky solution that comes to mind:
1. Create a module, let's say "wikimedia-event-utilities-shaded", that
relocates Jackson in the same way and uses the same Jackson version as
flink-shaded-jackso
Hi Christian,
I would expect that after the broker comes back up and recovers completely,
these error messages would disappear automagically. It should not require a
restart (only time). Flink doesn't rely on Kafka's checkpointing mechanism
for fault tolerance.
Best regards,
Martijn
On Wed, Jun 8
Hi Aaron,
There's currently no support in Flink for inserting a UUID data type
into Postgres. The Jira ticket you've included [1] is indeed the same
issue. It's just that the solution is most likely not to map it as a RAW
type, but to use a STRING type instead. Is this something where you might w
Hi Sucheth,
If you are referring to Table / SQL API, I'm afraid it doesn't support
schema evolution or different types from one Kafka table. An
alternative way is to consume the topic with raw format [1] and do
deserialization with a UDTF. If you are using the DataStream API, you
can implement the
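A rough sketch of that raw-format + UDTF approach (connector options, topic
and field names are illustrative only):

    import json

    from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
    from pyflink.table.udf import udtf

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # Raw format: each Kafka record arrives as a single BYTES column.
    t_env.execute_sql("""
        CREATE TABLE raw_topic (
            record BYTES
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'events',
            'properties.bootstrap.servers' = 'localhost:9092',
            'format' = 'raw'
        )
    """)

    @udtf(result_types=[DataTypes.STRING(), DataTypes.STRING()])
    def deserialize(record):
        # Dispatch on whatever distinguishes the schemas; here a 'type' field.
        event = json.loads(record.decode('utf-8'))
        yield event.get('type'), json.dumps(event)

    t_env.create_temporary_function("deserialize", deserialize)
    result = t_env.sql_query(
        "SELECT T.event_type, T.payload FROM raw_topic, "
        "LATERAL TABLE(deserialize(record)) AS T(event_type, payload)")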
Hi,
In order to unsubscribe, please send an email to
user-unsubscr...@flink.apache.org
Best regards,
Martijn
On Fri, Jun 10, 2022 at 17:23, wrote:
> Unsubscribe
>
Hi,
In order to unsubscribe, please send an email to
user-unsubscr...@flink.apache.org
Best regards,
Martijn
On Sat, Jun 11, 2022 at 19:46, tarun joshi <1985.ta...@gmail.com> wrote:
> Unsubscribe
>
Hi,
I believe this is a case where, for the FileSystem connector (both source and
sink), the metrics defined as part of FLIP-33 [1] have not been
implemented yet. I've created a ticket for that [2].
Best regards,
Martijn
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+