once in memory. At least that’s my
observation and understanding of e.g. this documentation:
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/execution_mode/#batch-execution-mode
Best,
Simon
From: Alexis Sarda-Espinosa
Date: Wednesday, 18 September 2024 at 19:45
To
y the kindness by improving the docs (memory or config), if any
learnings apply to them.
Best,
Simon
Hi,
I deploy a Flink job in Google Kubernetes Engine (GKE) with checkpointing
enabled on Google Cloud Storage (GCS). I configure the GCS filesystem
with the configuration described here:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/gcs/
I also want to encrypt data
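For reference, a minimal sketch of the checkpointing side of such a setup, assuming the bucket path and interval are placeholders (the GCS filesystem itself comes from the flink-gs-fs-hadoop plugin as in the linked docs; the encryption part is not covered here):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GcsCheckpointJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Take a checkpoint every 60 seconds (placeholder interval).
        env.enableCheckpointing(60_000);
        // Store checkpoints in GCS; the bucket path is a placeholder.
        env.getCheckpointConfig().setCheckpointStorage("gs://my-bucket/flink-checkpoints");
        // ... define sources/sinks and run the job:
        // env.execute("gcs-checkpoint-job");
    }
}
```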
__MMDDHHMM__NN.data
where files are rolled over every hour or so and "rekeyed" into NN slots as
per the event key to retain logical order while having reasonable file
sizes.
I presume someone has already done something similar. Any pointer would be
great!
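Not a full answer, but the slot assignment and naming parts can be sketched in plain Java; `slotFor`, `fileName`, and the hash-based "rekeying" below are hypothetical illustrations (same key always lands in the same slot, which preserves per-key order within a file), not a known existing solution:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class RolloverNaming {
    // Hypothetical helper: map an event key to one of numSlots files.
    static int slotFor(String key, int numSlots) {
        return Math.floorMod(key.hashCode(), numSlots);
    }

    // Build a file name following the __MMDDHHMM__NN.data pattern from the question.
    static String fileName(String prefix, LocalDateTime ts, int slot) {
        String stamp = ts.format(DateTimeFormatter.ofPattern("MMddHHmm"));
        return String.format("%s__%s__%02d.data", prefix, stamp, slot);
    }

    public static void main(String[] args) {
        System.out.println(fileName("events", LocalDateTime.now(), slotFor("order-42", 16)));
    }
}
```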
--
Simon Paradis
paradissi...@gmail.com
Hi All
Does the current Flink support setting checkpoint properties while using Flink SQL?
For example, the state backend choice, checkpoint interval, and so on ...
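For what it's worth, in recent Flink versions (1.13+) these can be set as plain configuration options in the SQL client; a sketch, with option keys from the Flink configuration reference and placeholder values:

```sql
-- Flink SQL client; values are examples only
SET 'execution.checkpointing.interval' = '30s';
SET 'state.backend' = 'rocksdb';
SET 'state.checkpoints.dir' = 'file:///tmp/flink-checkpoints';
```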
Thanks,
Simon
//public boolean filter(String value) {
//    return value.startsWith("http://");
//}
//})
.writeAsText("/tmp/file313");
env.execute();
Thanks,
Simon
On 10/31/2019 17:23, Till Rohrmann wrote:
In order to run the program
til.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
at
org.apache.flink.runtime.operators.util.TaskConfig.getStubWrapper(TaskConfig.java:288)
... 9 more
My understanding is that when using the remote environment, I don't need to upload
my program jar to the Flink cluster. So can anyone help me with this issue?
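For context, the remote environment API does accept jar files to ship with the program; a sketch, with host, port, and jar path as placeholders:

```java
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteEnvExample {
    public static void main(String[] args) {
        // The jar containing the user code is passed explicitly so the cluster
        // can load the program's classes; host, port, and path are placeholders.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 6123, "/path/to/my-program.jar");
        // ... define the program on env and call env.execute()
    }
}
```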
Thanks,
Simon
://maven.aliyun.com/nexus/content/groups/public/)
My maven command :
mvn install -Dmaven.test.skip=true -Dcheckstyle.skip -Dlicense.skip=true
-Drat.ignoreErrors=true -DskipTests -Pvendor-repos -DskipTests
Can anyone help me with this?
Thanks,
Simon
OK, Thanks Jark
Thanks,
Simon
On 08/13/2019 14:05, Jark Wu wrote:
Hi Simon,
This is a temporary workaround for the 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
Hi Jark
Thanks for your reply.
It’s weird that
also important.
Thanks,
Simon
On 08/13/2019 12:27, Jark Wu wrote:
I think we might need to improve the javadoc of
tableEnv.registerTableSource/registerTableSink.
Currently, the comment says
"Registers an external TableSink with already configured field names and field
types in
("ca1")
.withBuiltInDatabaseName("db1")
.build());
As Dawid said, if I want to store in my custom catalog, I can call
catalog.createTable or use DDL.
Thanks,
Simon
On 08/13/2019 02:55, Xuefu Z wrote:
Hi Simon,
Thanks for reporting the problem. There are some rough edges aro
"orderstream");
tableEnv.connect(sinkKafka)
    .withFormat(csv)
    .withSchema(schema2)
    .inAppendMode()
    .registerTableSink("sinkstream");
String sql = "insert into ca1.db1.sinkstream " +
    "select tumble_start(ts, INTERVAL '5' SECOND) as t, max(data) from ca1.db1.orderstream " +
    "group by tumble(ts, INTERVAL '5' SECOND), data";
tableEnv.sqlUpdate(sql);
tableEnv.execute("test");
Thanks,
Simon
Hi Jiongsong
Thanks for your reply.
It seems that wrapping fields is a feasible way for me now. There is
already another JIRA, FLINK-8921, that tries to improve this.
Thanks,
Simon
On 06/26/2019 19:21, JingsongLee wrote:
Hi Simon:
Does your code include the PR[1]?
If include: try
" grows beyond 64 KB
So is there any best practice for this?
Thanks,
Simon
https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/checkpoints.html
Thanks,
Simon
On 06/25/2019 21:43, wangl...@geekplus.com.cn wrote:
I start and cancel it just in my IntelliJ IDEA development environment.
First click the run button, then click the red stop button, and
Hi wanglei
Can you post how you restart the job?
Thanks,
Simon
On 06/25/2019 20:11, wangl...@geekplus.com.cn wrote:
public class StateProcessTest extends KeyedProcessFunction, String> {
private transient ValueState> state;
public void processElement(Tuple2 value, Context ctx,
Col
Hi guys
Is there an easy way to handle external DB connections inside a
RichFlatMapFunction?
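One common pattern is to open the connection in the function's lifecycle methods rather than in the constructor, so it is created on the worker and closed cleanly; a sketch assuming a JDBC database, with URL, credentials, and the class name as placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class EnrichFromDb extends RichFlatMapFunction<String, String> {
    private transient Connection conn;

    @Override
    public void open(Configuration parameters) throws Exception {
        // One connection per parallel task instance, created on the worker,
        // not serialized with the function. URL and credentials are placeholders.
        conn = DriverManager.getConnection("jdbc:postgresql://db-host/mydb", "user", "pass");
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        // ... look up `value` via conn and emit enriched records
        out.collect(value);
    }

    @Override
    public void close() throws Exception {
        if (conn != null) {
            conn.close();
        }
    }
}
```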
--Thanks Simon
Hi
In other words, what's the easiest way to clean up states in Flink, if this key
may never arrive again?
--Thanks
Simon
> On 02 Jun 2016, at 10:16, simon peyer wrote:
>
> Hi Max
>
> Thanks for your answer.
> We have some states, on some keys, which we would
threshold.
So I would like to have an option to periodically check the timestamp, and
remove old entries from the state.
Therefore I used the KeyedStream to key by an id.
Can I use the internal function to do the snapshotting, but use the snapshot
function to do some cleanup on the states?
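As an aside for later readers: since Flink 1.6 (well after this 2016 thread), state TTL can expire old entries automatically, which addresses the "remove old entries" part without manual snapshot hooks. A sketch with a placeholder 24-hour TTL:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlExample {
    static ValueStateDescriptor<Long> descriptorWithTtl() {
        // Entries expire 24 hours after the last write (placeholder value).
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.hours(24))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .cleanupFullSnapshot()
                .build();
        ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("lastSeen", Long.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}
```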
Th
Hi Max
I'm using a keyby but would like to store the state.
Thus what's the way to go?
How do I have to handle the state in option 2)?
Could you give an example?
Thanks
--Simon
> On 01 Jun 2016, at 15:55, Maximilian Michels wrote:
>
> Hi Simon,
>
> There are two
... 3 more
Caused by: java.lang.RuntimeException: No key available.
at
org.apache.flink.runtime.state.memory.MemValueState.update(MemValueState.java:69)
at Function which has the Checkpointed thingy
(CheckpointedIncrAddrPositions.scala:68)
What am I missing?
Thanks
Simon
e with the same ids on the nodes, will the checkpoint
be automatically loaded and the state restored?
-Thanks
Simon
}
def mergeArgsWithProperties(configuration: Properties, args: Array[String]): Properties = {
    configuration
}
}
Have a nice day
--Simon
> On 24 May 2016, at 01:51, Bajaj, Abhinav wrote:
>
> I was gonna post the exact question and noticed this thread.
>
> It will be great if we can have a me
Hi, I'm using log4j, running locally in cluster mode, with sh start_cluster.sh.
No warnings in the command line. Please find attached the logging files.
Where am I supposed to find the logging information? In the log directory, right?
Cheers Simon
Error")
But it doesn't show on the console.
After reading a lot of websites about this issue I was not able to come to a
solution.
Do you guys have any ideas how to integrate the slf4j logger into Scala Flink?
--Thanks
Simon
Hi Max
Thanks a lot for your helpful answer.
It now works on the cluster.
It would be great to have a method for loading from resources.
-Cheers
Simon
> On 23 May 2016, at 17:52, Maximilian Michels wrote:
>
> Hi Simon,
>
> AFAIK this is the way to go. We could add
ssing something?
Cheers
Simon
> On 23 May 2016, at 16:30, Stefano Baghino
> wrote:
>
> Are you using Maven to package your project? I believe the resources
> plugin[1] can suit your needs.
>
> [1]:
> http://maven.apache.org/plugins/maven-resources-plugin/examples/inc
ception: Properties file
file:/tmp/flink-web-upload-57bcc912-bc98-4c89-b5ee-c5176155abd5/992186c1-b3c3-4342-a5c8-67af133155e4pipeline-0.1.0-all.jar!/config.properties
does not exist
The property file is located in src/main/resources.
Do you have any idea of how to integrate such property files into the jar
package?
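One way to make this work inside the packaged jar is to read the file from the classpath instead of a filesystem path; a sketch (the class and resource names are placeholders):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ResourceConfig {
    // Load a properties file from the classpath rather than a file path,
    // so it resolves inside the packaged jar as well as in the IDE.
    public static Properties load(String resourceName) {
        Properties props = new Properties();
        try (InputStream in = ResourceConfig.class.getClassLoader().getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // Fall through: return empty properties if the resource cannot be read.
        }
        return props;
    }
}
```

A file under src/main/resources ends up on the classpath of the built jar, so `load("config.properties")` finds it without any absolute path.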
-Thanks
Simon
)
method:
public static final class Tokenizer extends RichFlatMapFunction> {
@Override
public void open(Configuration parameters) throws Exception {
parameters.getInteger("myInt", -1);
// .. do
Cheers
Simon
> On 23 May 2016, at 14:01,
onaws.com:8081/#/jobs/1d5973f40ab0951e7c733a2c439b013a/properties>Configuration
<http://ec2-52-58-144-190.eu-central-1.compute.amazonaws.com:8081/#/jobs/1d5973f40ab0951e7c733a2c439b013a/config>
)
What's the most common way to handle properties in Flink?
Is there a general way to go and any kind of integra
ustom configuration in there?
Thanks Simon
Hi
Is there already a version out for gradle?
compile 'org.apache.flink:flink-streaming-scala_2.11:1.1-SNAPSHOT' doesn't work
I'm using scala on eclipse.
And I would like to test the snapshot edition.
Any suggestions?
Cheers
Simon
> On 18 May 2016, at 14:51, Ovidiu-C
lease date of flink 1.1
Thanks
--Simon
> On 18 May 2016, at 13:45, Márton Balassi wrote:
>
> Hey Simon,
>
> The general policy is that the community aims to release a major version
> every 3 months. That would mean the next release coming out in early to mid
> Ju
Hi guys
When are you expecting to release a stable version of Flink 1.1?
--Cheers
Simon