Is there a clean way of exposing a metric for the number of keys
(even if it is an estimate using 'rocksdb.estimate-num-keys') in a RocksDB
state store? RocksDBValueState is not public to user code.
Best,
Ahmed
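[Editor's note: for later readers, recent Flink versions (1.7+) can export
RocksDB's native key estimate through the metrics system, which sidesteps the
RocksDBValueState visibility problem. A minimal sketch, assuming the RocksDB
state backend is in use and a metric reporter is configured:]
```
// Sketch: enable RocksDB's native key-count estimate as a Flink metric,
// assuming a Flink version (1.7+) that supports RocksDB native metrics.
Configuration config = new Configuration();
config.setBoolean("state.backend.rocksdb.metrics.estimate-num-keys", true);
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.createLocalEnvironment(1, config);
```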
Hi,
I'm working on a project using Flink with Spring Boot. When I run the
application I get an exception:
Cannot determine the size of the physical memory for Windows host (using
'wmic memorychip'): Cannot run program "wmic": CreateProcess error=2, The
system cannot find the file specified
java.io
Hi,
How can I set the parallelism in a generic way that provides better
performance on any machine, not only mine? Is it better to set it to a
certain value, or, if I don't set it at all, does Flink take care of that
generically and provide better execution performance?
Thanks.
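[Editor's note: a hedged sketch of the two usual options — leave the
parallelism unset so the cluster default (parallelism.default in
flink-conf.yaml) applies, or derive it from the machine instead of
hard-coding a value:]
```
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();

// Option 1: call nothing; the cluster's parallelism.default applies.

// Option 2 (mainly for local runs): scale with the machine.
env.setParallelism(Runtime.getRuntime().availableProcessors());
```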
Hello Suneel,
Yeah that worked, thanks so much.
On 16 March 2016 at 12:50, Suneel Marthi wrote:
> DataStream ds = ...
>
> Iterator iter = DataStreamUtils.collect(ds);
>
> List list = Lists.newArrayList(iter);
>
> Hope that helps.
>
>
> On Wed, Mar 16
to be used somewhere else afterwards.
Thanks,
Ahmed
rallelism [1]?
> It explains how to set the parallelism on different levels: Cluster, Job,
> Task.
>
> Best, Fabian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/common/index.html#parallel-execution
>
> 2016-03-18 13:34 GMT+01:00 Till Rohrmann
Hello,
I'm new to Flink so this might seem a basic question. I added Flink to an
existing project using Maven and can run the program locally with
StreamExecutionEnvironment with no problems, however I want to know how
I can submit jobs for that project and be able to view these jobs from Flink's
w
2016 at 12:48, Matthias J. Sax wrote:
> You need to download Flink and install it. Follow this instructions:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
>
> -Matthias
>
> On 04/16/2016 04:00 PM, Ahmed Nader wrote:
> >
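[Editor's note: besides installing Flink as Matthias suggests, jobs can also
be submitted straight from the Maven project to a running cluster. A sketch,
where host, port, and jar path are placeholders:]
```
// Sketch: submit from the IDE/Maven project to a standalone cluster whose
// web interface then shows the job. Host, port, and jar path are placeholders.
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.createRemoteEnvironment(
        "localhost",                   // JobManager host
        6123,                          // JobManager RPC port (Flink 1.x standalone default)
        "target/my-project-1.0.jar");  // fat jar containing the user code
// build the topology on env as usual, then:
env.execute("my-job");
```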
Sorry, the error is "can't find the path specified"*
On 17 April 2016 at 15:49, Ahmed Nader wrote:
> Thanks, I followed the instructions and when I try to start the web
> interface I get an error "can't find file specified". I tried to change the
> env.java.home variable to the
interface"? The web
> interface is started automatically within the JobManager process.
>
> What is the exact error message? Is there any stack trace? Any error in
> the log files (in directory log/)?
>
> -Matthias
>
> On 04/17/2016 03:50 PM, Ahmed Nader wrote:
>
stream that way? And then, if I'm able to do so, is it possible to keep
updating my collection with the content in my DataStream so far?
I hope I was able to make my question clear enough.
Thanks,
Ahmed
ogram, that would be great too.
Thanks,
Ahmed
threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
[tomcat-embed-core-8.0.32.jar:8.0.32] at
java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
And please note that the line these exceptions point to is the line where
I'm checking the condition if (iterator.hasNext()).
Thanks,
Ah
that maintains a
set of Kafka producers? Or a custom SinkFunction that delegates the work to a
collection of FlinkKafkaProducer instances? Is there a better approach?
Thanks in advance.
Truly,
Ahmed
needed.
On Tuesday, April 20, 2021, 01:11:08 PM PDT, Ejaskhan S
wrote:
Hi Ahmed,
If you want to dynamically produce events to different topics and you have the
logic to identify the target topics, you will be able to achieve this in the
following way.
- Suppose this is your
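[Editor's note: the reply is cut off here. As a hedged sketch of the idea on
the Flink 1.12-era API, a KafkaSerializationSchema can pick the topic per
record — the Event type and its accessors below are hypothetical:]
```
// Sketch, assuming a Flink 1.12-era FlinkKafkaProducer. "Event" and its
// targetTopic field are hypothetical; the point is that the topic in the
// ProducerRecord can differ per element.
KafkaSerializationSchema<Event> dynamicTopicSchema = (event, timestamp) ->
    new ProducerRecord<>(
        event.getTargetTopic(),                               // chosen per record
        event.getPayload().getBytes(StandardCharsets.UTF_8));

FlinkKafkaProducer<Event> producer = new FlinkKafkaProducer<>(
    "fallback-topic",        // default topic, used when the schema names none
    dynamicTopicSchema,
    kafkaProperties,
    FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
```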
Hi,
I am trying to use IAM Roles with Service Accounts on Flink 1.12 running on
Kubernetes. Previously I was using KIAM to provide identification to the pods,
and that worked fine. However, when switching to use IRSA, I see the following
errors (posted below). Has anyone experienced a similar is
Hi all,
I have been searching around quite a bit and doing my own experiments to
make the latest Flink release 1.7.1 work with Apache Zeppelin, however
Apache Zeppelin's Flink interpreter is quite outdated (Flink 1.3). AFAIK
it's not possible to use Flink running on YARN via Zeppelin as it only wo
Hello Flavio,
Thank you so much for replying, however I didn't download Flink locally, I
only added dependencies in a Maven project. So I don't think I'll be able
to modify the KryoSerializer class. But yeah, me too, I think that's the
problem.
Thanks,
Ahmed
On 8 June 2
Hello All,
Hope you are doing well.
My name is Samim and I am working on a POC (proof of concept) for a project. In
this project we are using Apache Flink to process the stream data and find
the required pattern and finally dump those patterns in a DB.
So to implement this we have used the global w
modify to get the high
performance in terms of less execution time. Thanks in advance
--
Regards,
Samim Ahmed
Mumbai
09004259232
Hi Lajith,
Could you please open the discussion thread against "d...@apache.flink.org",
I believe it is better suited there.
Best Regards
Ahmed Hamdy
On Wed, 27 Mar 2024 at 05:33, Lajith Koova wrote:
>
>
> Hello,
>
>
> Starting discussion thread here to discuss
I am sorry, it should be "d...@flink.apache.org"
Best Regards
Ahmed Hamdy
On Wed, 27 Mar 2024 at 13:00, Ahmed Hamdy wrote:
> Hi Lajith,
> Could you please open the discussion thread against "d...@apache.flink.org",
> I believe it is better suited there.
> Best Re
enting a
custom process function to hold batches of records in state and emit an
aggregated record, but this might not be consistent with KPL aggregation,
and de-aggregated records might not be retrievable, so I would advise
against taking this approach.
Best Regards
Ahmed Hamdy
On Mon, 29 Apr
Hi Kush
Unfortunately there is currently no real Redis connector maintained by the
Flink community. I am aware that Bahir's version might be outdated, but we
are currently working on a community-supported connector [1].
1-https://github.com/apache/flink-connector-redis-streams
Best Regards
Hi Aniket
The community is currently working on releasing a new version for all the
connectors that is compatible with 1.19. Please follow the announcements in
Flink website[1] to get notified when it is available.
1-https://flink.apache.org/posts/
Best Regards
Ahmed Hamdy
On Fri, 10 May 2024
https://github.com/apache/flink-connector-aws repo. One of the
committers should be able to review it then.
Best Regards
Ahmed Hamdy
On Fri, 31 May 2024 at 00:55, Rob Goretsky
wrote:
> Hello!
>
> I am looking to use the DynamoDB Table API connector to write rows to AWS
> DynamoDB.
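[Editor's note: the thread is truncated here. For orientation, a hedged
sketch of what a DynamoDB sink table can look like with the
flink-connector-dynamodb Table connector — the schema, table name, and
region values are illustrative only:]
```
// Sketch of a DynamoDB sink table via the Table API, assuming the
// flink-connector-dynamodb artifact is on the classpath. Schema and
// option values are illustrative only.
tableEnv.executeSql(
    "CREATE TABLE orders_sink (" +
    "  order_id STRING," +
    "  amount   DOUBLE" +
    ") WITH (" +
    "  'connector'  = 'dynamodb'," +
    "  'table-name' = 'orders'," +
    "  'aws.region' = 'us-east-1'" +
    ")");
```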
, reprocessing is bound by the checkpoint interval, which is 5
minutes; you can make it tighter if it suits your case better.
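[Editor's note: as a concrete sketch of "making it tighter" — the interval
value is only an example:]
```
// Shrink the checkpoint interval so the at-least-once reprocessing window
// gets smaller; 1 minute here is an arbitrary example value.
env.enableCheckpointing(60_000L);
```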
Best Regards
Ahmed Hamdy
On Thu, 18 Jul 2024 at 11:37, banu priya wrote:
> Hi All,
>
> Gentle reminder about the above query.
>
> Thanks
> Banu
>
> On T
periment with
different configurations for rocksdb compaction.
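[Editor's note: a hedged sketch of what "experimenting with compaction" can
mean in configuration terms — the option keys follow the RocksDB state
backend's configurable options in newer Flink versions, and the values are
examples only:]
```
// Sketch: RocksDB compaction knobs, assuming a Flink version with the
// configurable RocksDB options. Values are examples only.
Configuration conf = new Configuration();
conf.setString("state.backend.rocksdb.compaction.style", "LEVEL");
conf.setString("state.backend.rocksdb.compaction.level.target-file-size-base", "64mb");
```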
Best Regards
Ahmed Hamdy
On Thu, 18 Jul 2024 at 13:52, banu priya wrote:
> Hi Ahmed,
>
> Thanks for the clarification. I see from the Flink documentation that Kafka
> sinks are transactional and de-duplication happ
release manager to pick up the 3.1 release.
1-https://issues.apache.org/jira/browse/FLINK-26088
2-https://lists.apache.org/list.html?d...@flink.apache.org
Best Regards
Ahmed Hamdy
On Wed, 31 Jul 2024 at 15:06, Rion Williams wrote:
> Hi again all,
>
> Just following up on this as I’v
/opt/flink/plugins/flink-s3-fs-hadoop/
```
1- https://mvnrepository.com/artifact/org.apache.flink/flink-s3-fs-hadoop
2- https://mvnrepository.com/artifact/org.apache.flink/flink-s3-fs-presto
Best Regards
Ahmed Hamdy
On Fri, 2 Aug 2024 at 00:05, Maxim Senin via user
wrote:
> When will Fl
Why do you believe it is an SSL issue?
The error trace looks like a memory issue. You could refer to the
TaskManager memory setup guide[1].
1-
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/memory/mem_setup_tm/
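[Editor's note: the common first step from that guide, sketched here. It is
normally set as taskmanager.memory.process.size in flink-conf.yaml; the 4g
figure is an arbitrary example:]
```
// Sketch, assuming the unified memory model (Flink 1.10+): raise the
// overall TaskManager memory budget. Equivalent to
// taskmanager.memory.process.size in flink-conf.yaml.
Configuration conf = new Configuration();
conf.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("4g"));
```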
Best Regards
Ahmed Hamdy
On Fri, 23 Aug 2024 at 13:47, John Smith
RollingPolicy
<https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/connectors/datastream/filesystem/#rolling-policy>
won’t work.
work.
Best Regards
Ahmed Hamdy
On Fri, 23 Aug 2024 at 11:48, Nicolas Paris
wrote:
> hi
>
> From my tests kafka sink in exactly-once and ba
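[Editor's note: a hedged sketch of the constraint being referenced — with
bulk formats the FileSink rolls only on checkpoints, so a time/size
RollingPolicy does not apply. The record type, writer factory, and path
below are placeholders:]
```
// Sketch: bulk-format FileSink rolls on checkpoint only, so
// OnCheckpointRollingPolicy is effectively the only choice here.
FileSink<MyRecord> sink = FileSink
    .forBulkFormat(new Path("s3://bucket/out"),
                   ParquetAvroWriters.forReflectRecord(MyRecord.class))
    .withRollingPolicy(OnCheckpointRollingPolicy.build())
    .build();
```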
Java's
maven/gradle dependency management systems.
1-
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/filesystems/overview/#local-file-system
2-
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/dependency_management
Best Regards
Ahmed Hamdy
On M
://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumerator.java#L286
Best Regards
Ahmed Hamdy
On Sun, 6 Oct 2024 at 06:41, Anil Dasari wrote:
> Hello,
> I have implemented a custom source that
ljdbc/DatabaseSplitReader.java
Best Regards
Ahmed Hamdy
On Sun, 6 Oct 2024 at 16:48, Anil Dasari wrote:
> Hi Ahmed,
> Thanks for the response.
> This is the part that I find unclear in the documentation and FLIP-27. The
> actual split assignment happens in the SplitEnumerator#handleSpli
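[Editor's note: the message is cut off, but since the question is about where
assignment happens, here is a hedged sketch of the pull model inside a custom
SplitEnumerator — MySplit and pendingSplits are placeholders for the
connector's own types:]
```
// Hedged sketch of pull-based assignment inside a custom SplitEnumerator.
@Override
public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname) {
    MySplit next = pendingSplits.poll();          // queue of unassigned splits
    if (next != null) {
        context.assignSplit(next, subtaskId);     // assignment happens here
    } else {
        context.signalNoMoreSplits(subtaskId);    // bounded source: reader is done
    }
}
```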
1-
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#starting-offset
Best Regards
Ahmed Hamdy
On Tue, 18 Mar 2025 at 22:20, mejri houssem
wrote:
>
> Hello everyone,
>
> We have a stateless Flink job that uses a Kafka source with at-least-once
>
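[Editor's note: the doc link above boils down to the setStartingOffsets call
on the KafkaSource builder; a sketch with placeholder broker, topic, and
group values:]
```
// Sketch: resume from committed group offsets, falling back to EARLIEST
// when none exist. Broker, topic, and group id are placeholders.
KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("broker:9092")
    .setTopics("input-topic")
    .setGroupId("my-group")
    .setStartingOffsets(
        OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();
```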
while if you use savepoints
you can redeploy from the last correct version savepoint and reprocess the
data that was processed by the buggy job.
Best Regards
Ahmed Hamdy
On Wed, 19 Mar 2025 at 00:54, mejri houssem
wrote:
> Hello Ahmed,
>
> Thanks for the response.
>
> Does that
Hi there,
https://stackoverflow.com/questions/52996054/resource-changed-on-src-filesystem-in-azure-flink
We are unable to start Flink on an Azure Hadoop cluster [on top of WASB]. This
throws:
Application application_1539730571763_0046 failed 1 times (global limit =5;
local limit is =1) due to AM
Hi,
We are submitting a Flink topology [YARN] and it fails during upload of the jar
with no error info.
[main] INFO org.apache.flink.runtime.client.JobClient - Checking and uploading
JAR files
[main] ERROR org.apache.flink.client.CliFrontend - Error while running the
command.
org.apache.fli
From: Andrey Zagrebin
Date: Friday, July 19, 2019 at 10:36 AM
To: Fakrudeen Ali Ahmed
Cc: "user@flink.apache.org"
Subject: Re: Job submission timeout with no error info.
Hi Fakrudeen,
Which Flink version do you use? Could you share the full client an
From: Andrey Zagrebin
Date: Monday, July 22, 2019 at 8:52 AM
To: Fakrudeen Ali Ahmed
Cc: "user@flink.apache.org"
Subject: Re: Job submission timeout with no error info.
Hi Fakrudeen,
Thanks for sharing the lo
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:115)
... 46 more
Thanks,
-Fakrudeen
(define (sqrte n xn eph)
  (if (> eph (abs (- n (* xn xn))))
      xn
      (sqrte n (/ (+ xn (/ n xn)) 2) eph)))
From: Fakrudeen Ali Ahmed
Date: Monday, July 22, 2019 at 9:08 AM
To: Andrey Zagrebin
pList
// - Remove existing entry from tripList
// Update state
trips.update(tripList); }
Thank you in advance,
Ahmed.
which allowed unit testing of the
Processor. I am looking for something similar for the stateless processor.
Any pointers or links to sample code would be greatly appreciated.
Regards,
Ahmed.
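[Editor's note: one commonly used option, sketched under the assumption that
flink-test-utils is on the test classpath — the ProcessFunctionTestHarnesses
helper drives a ProcessFunction (stateless or not) directly; the function
under test below is a trivial stand-in:]
```
// Sketch, assuming flink-test-utils on the test classpath.
ProcessFunction<Integer, Integer> fn = new ProcessFunction<Integer, Integer>() {
    @Override
    public void processElement(Integer value, Context ctx, Collector<Integer> out) {
        out.collect(value * 2);   // stateless transformation under test
    }
};

OneInputStreamOperatorTestHarness<Integer, Integer> harness =
    ProcessFunctionTestHarnesses.forProcessFunction(fn);
harness.open();
harness.processElement(21, 0L);   // (value, timestamp)
// harness.extractOutputValues() should now contain [42]
```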