Hi Till,
Thanks for your answer!
I will try the approach you suggested; I hope I can make it work.
Best regards
martin
On Wed, Dec 2, 2020 at 17:03 Till Rohrmann wrote:
> Hi Martin,
>
> In general, Flink's MiniCluster should be able to run every complete Flink
> JobGraph. However, from what I
Glad to hear that you solved this issue!
Best,
Yun
--
Sender: narasimha
Date: 2020/12/06 21:35:33
Recipient: Yun Gao
Cc: user
Subject: Re: How to parse list values in csv file
Thanks for your email.
Translated the CSV to JSON, read it as a pla
Hi Rex,
I tried a similar example [1] but could not reproduce the issue. Which version
of Flink are you using now?
Best,
Yun
[1] The example code:
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment bsEnv =
    StreamExecutionEnvironment.getExecutionEnvironment();
bsEnv.setRestartStrategy(RestartStrategies.noRestart());
1. How can I create a Kafka table that can use headers and map them to columns?
Currently, I am using KafkaDeserializationSchema to create a DataStream, and
then I convert that DataStream into a Table. I would like to use a more direct
approach.
2. What is the recommended way to enrich a Kafka t
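For question 1, a minimal sketch assuming Flink 1.12+, where the Kafka SQL
connector exposes record metadata such as headers directly in the DDL (the
table name, topic, and payload columns below are illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaHeadersTable {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // `headers` is a metadata column of the Kafka connector (Flink 1.12+);
        // everything else here is an illustrative schema.
        tEnv.executeSql(
            "CREATE TABLE events (" +
            "  id STRING," +
            "  payload STRING," +
            "  headers MAP<STRING, BYTES> METADATA" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'events'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'json'" +
            ")");
    }
}

This avoids the DataStream round trip entirely; the headers arrive as a
MAP<STRING, BYTES> column that can be read directly in SQL.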
Hi, Amr
What sink do you use? I think it means that the sink does not support
"upsert" mode.
If you use Kafka as a sink [1], I think you could try it once you are on 1.12.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/upsert-kafka.html
Best,
Guowei
On Mon, Dec 7, 2
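To make [1] concrete, a minimal sketch of an upsert-kafka sink as it looks in
Flink 1.12+ (the table name, topic, and schema are illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSink {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // upsert-kafka requires a PRIMARY KEY; updates and deletes are encoded
        // as Kafka records keyed by it (tombstones for deletes).
        tEnv.executeSql(
            "CREATE TABLE result_sink (" +
            "  user_id STRING," +
            "  cnt BIGINT," +
            "  PRIMARY KEY (user_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'results'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'json'," +
            "  'value.format' = 'json'" +
            ")");
    }
}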
FYI, I've opened FLINK-20503 for this.
https://issues.apache.org/jira/browse/FLINK-20503
Thank you~
Xintong Song
On Mon, Dec 7, 2020 at 11:10 AM Xintong Song wrote:
> I forgot to mention that it is designed that task managers always have
> `Double#MAX_VALUE` cpu cores in local execution.
>
>
Hi Yang,
Why did your checkpoint fail: did it expire, or did it fail due to an error?
Could you paste the jstack result showing what RocksDB is doing during the
checkpoint?
BTW, you could also use async-profiler [1] to see which CPU operations your
job is performing; this tool could help to view wh
I forgot to mention that it is designed that task managers always have
`Double#MAX_VALUE` cpu cores in local execution.
And I think Yangze is right. The log "The configuration option
taskmanager.cpu.cores required for local execution is not set, setting it
to" can be misleading for users. Will fi
Hi Rex,
> We're running this in a local environment so that may be contributing to
> what we're seeing.
>
Just to double check on this. By `local environment`, do you mean running
Flink without setting up a standalone cluster or submitting it to a
K8s/Yarn cluster? (Typically executing from an IDE, run
I think you could use the following config options to set environment
variables for the JobManager and TaskManager.
You could then reference those variables in the log4j configuration file;
"${env:PIPELINE}" works in log4j2, as sketched below.
containerized.master.env.PIPELINE: my-flink-pipeline
containerized.taskmanager.env.PIPELINE: my-flink-pipeline
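A sketch of how that could look in log4j2.properties (the appender name and
pattern are illustrative; log4j2 resolves ${env:PIPELINE} through its built-in
environment-variable lookup):

appender.console.type = Console
appender.console.name = ConsoleAppender
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} [${env:PIPELINE}] %-5p %c - %m%n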
My gut feeling is that your "vmArgs" does not take effect.
Best,
Yangze Guo
On Mon, Dec 7, 2020 at 10:32 AM Yangze Guo wrote:
>
> Hi, Rex,
>
> Can you share more logs for it? Did you see something like "The
> configuration option taskmanager.cpu.cores required for local
> execution is not set, settin
Hi, Rex,
Can you share more logs for it? Did you see something like "The
configuration option taskmanager.cpu.cores required for local
execution is not set, setting it to" in your logs?
Best,
Yangze Guo
On Sat, Dec 5, 2020 at 6:53 PM David Anderson wrote:
>
> taskmanager.cpu.
Hi, Jakub
In theory there should not be any problem, because you can register the
function object.
So would you like to share your code and the shell command you use to submit
your job?
Best,
Guowei
On Mon, Dec 7, 2020 at 3:19 AM Jakub N wrote:
> The current setup is: Data in Kafka -> Kafka Conn
Hello,
I am a newbie in Flink, and I am stuck and looking for help. I want to join
streams A, B, C, D from CSV source files; some of the streams update
frequently. I also have a high-throughput stream K from Kafka, and I need to
filter stream K against [A, B, C, D]. I tried using the Flink Table API, Union a
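One common pattern for this shape of problem (a large stream filtered against
smaller, slowly changing sets) is broadcast state; a rough sketch, not
necessarily what the original poster needs, with all stream types and names
illustrative:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastFilter {
    // kEvents: the high-throughput Kafka stream; filterKeys: the union of the
    // keys from A/B/C/D. Both are assumed to be plain String streams here.
    static DataStream<String> filter(DataStream<String> kEvents,
                                     DataStream<String> filterKeys) {
        final MapStateDescriptor<String, Boolean> desc =
            new MapStateDescriptor<>("filter-keys", Types.STRING, Types.BOOLEAN);
        BroadcastStream<String> broadcast = filterKeys.broadcast(desc);
        return kEvents.connect(broadcast)
            .process(new BroadcastProcessFunction<String, String, String>() {
                @Override
                public void processElement(String value, ReadOnlyContext ctx,
                                           Collector<String> out) throws Exception {
                    // Keep only events whose key is present in the broadcast set.
                    if (ctx.getBroadcastState(desc).contains(value)) {
                        out.collect(value);
                    }
                }

                @Override
                public void processBroadcastElement(String key, Context ctx,
                                                    Collector<String> out) throws Exception {
                    ctx.getBroadcastState(desc).put(key, true);
                }
            });
    }
}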
The current setup is: Data in Kafka -> Kafka Connector ->
StreamTableEnvironment -> execute Flink SQL queries
I would like to register Flink's User-defined Functions from a jar or java
class file during runtime. What I have tried so far is using Java's ClassLoader
to get an instance of a Scala
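Not an official recipe, just a sketch of the kind of runtime registration
described above, assuming a ScalarFunction compiled into a jar (the path and
class name are hypothetical):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class RuntimeUdfLoader {
    static void registerUdfFromJar(TableEnvironment tEnv) throws Exception {
        URL jarUrl = new File("/path/to/udfs.jar").toURI().toURL(); // hypothetical jar
        // Keep the loader open for the lifetime of the job so classes can be
        // resolved lazily.
        URLClassLoader loader = new URLClassLoader(new URL[]{jarUrl},
                Thread.currentThread().getContextClassLoader());
        Class<? extends ScalarFunction> udf = loader
            .loadClass("com.example.MyUdf") // hypothetical class name
            .asSubclass(ScalarFunction.class);
        tEnv.createTemporarySystemFunction("myUdf", udf);
    }
}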
Hi,
We're using Apache Flink 1.9.2 and we've started logging everything as JSON
with log4j (standard log4j1 that comes with Flink). When I say JSON logging, I
just mean that I've formatted it according to:
log4j.appender.console.layout.ConversionPattern={"level": "%p", "ts":
"%d{ISO8601}", "cl
Thanks, Fabian, for responding.
flink image: registry.ververica.com/v2.2/flink:1.11.1-stream1-scala_2.12
There are no errors as such, but it is only considering the first job.
On Thu, Dec 3, 2020 at 5:34 PM Fabian Paul wrote:
> Hi Narasimha,
>
> Nothing comes to my mind immediately why it shou
Thanks for your email.
I translated the CSV to JSON, read it as a plain text file, and then processed
it into objects.
That solved my use case.
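For anyone finding this thread later, a rough sketch of that workaround
(Jackson, the file path, and the MyRecord POJO are all illustrative):

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JsonLinesJob {
    public static class MyRecord {
        public String id;
        public long value;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Each line of the pre-converted file is one JSON document.
        DataStream<MyRecord> records = env
            .readTextFile("/path/to/records.jsonl") // hypothetical path
            .map(new RichMapFunction<String, MyRecord>() {
                private transient ObjectMapper mapper;

                @Override
                public void open(Configuration parameters) {
                    // ObjectMapper is not serializable, so build it per task.
                    mapper = new ObjectMapper();
                }

                @Override
                public MyRecord map(String line) throws Exception {
                    return mapper.readValue(line, MyRecord.class);
                }
            });
        records.print();
        env.execute("json-lines");
    }
}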
On Fri, Dec 4, 2020 at 12:24 PM Yun Gao wrote:
>
> Hi,
>
> The CSV format only supports the types listed in [1], and you must use the
> types in this list; thus for other t