To find bug fixes that are going into 1.4.x, say 1.4.3, you can filter
Jira tickets with 'Fix Versions' set to '1.4.3'.
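For example, a JQL filter along these lines would list them (the FLINK project key is an assumption about the Jira instance):

```
project = FLINK AND issuetype = Bug AND fixVersion = "1.4.3"
```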
On Thu, Apr 19, 2018 at 1:36 AM, Daniel Harper
wrote:
> Hi there,
>
> There are some bug fixes that are in the 1.4 branch that we would like to
> be made available for us to use.
One of my task managers dropped out of the cluster, and when I checked
its log I found the following:
2018-04-19 22:34:47,441 INFO org.apache.flink.runtime.taskmanager.Task
- Attempting to fail task externally Process (115/120)
(19d0b0ce1ef3b8023b37bdfda643ef44).
2018-04-19 22:34:47,441 I
So I have events coming in in this format:
TransactionID, MicroserviceName, Direction, EpochTimeStamp.
For each call to a microservice, an event is generated with a timestamp and
direction "in". When the call completes, another event is generated with a
timestamp and direction "out".
I need to calculate latency
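In a Flink job, pairing "in" and "out" events is typically done in a KeyedProcessFunction with keyed state. The core pairing logic can be sketched in plain Java (class and method names are hypothetical, and a HashMap stands in for Flink's keyed state):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of in/out pairing to compute per-call latency.
// In a real Flink job this map would be keyed state,
// keyed by (TransactionID, MicroserviceName).
class LatencyTracker {
    private final Map<String, Long> inTimestamps = new HashMap<>();

    // Records one event; returns the latency in ms when an "out"
    // event matches a previously seen "in" event, otherwise null.
    Long record(String txId, String service, String direction, long epochMs) {
        String key = txId + "|" + service;
        if ("in".equals(direction)) {
            inTimestamps.put(key, epochMs);
            return null;
        }
        // "out" event: look up and clear the matching "in" timestamp
        Long start = inTimestamps.remove(key);
        return start == null ? null : epochMs - start;
    }
}
```

Note that an "out" arriving before its "in" (out-of-order events) yields null here; a real implementation would buffer it or use event-time timers.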
Thanks for your kind reply.
But I still have some questions. What does the logging level mean in your
system? Why do you need to re-deploy the cluster to change the logging level?
As far as I know, debugging information can be divided into levels like
info, warn, error, etc. Is this informa
That sounds nice. But Flink's lazy evaluation seems to make the debugging
process quite different from that of a regular Java/Scala application. Do you
know how to debug a Flink application, e.g., tracing some local variables, in an IDE?
Best,
Stephen
> On Apr 19, 2018, at 6:29 AM, Fabian Hueske
Hi Amit,
Thank you for the follow-up. What you describe sounds like a bug, but I am
not able to reproduce it. Can you open an issue in Jira with an outline of
your code and how you submit the job?
> Could you also recommend the best practice with FLIP-6: should we use a
YARN session or submit jobs i
Hi Sebastien,
You need to add the dependency under a “dependencies” section, like so:
…
Then it should be working.
I would also recommend using the Flink quickstart Maven templates [1], as they
already have a well defined Maven project skeleton for Flink jobs.
Cheers,
Gordon
[1]
h
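As a sketch of such a "dependencies" section (the artifact and version shown are assumptions; pick the connector and version matching your Flink/Scala setup):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
    <version>1.4.2</version>
  </dependency>
</dependencies>
```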
Hi Pedro,
You can try to call either
.rebalance() or .shuffle()
before the Async operator. Shuffle might give a better result if you
have fewer tasks than parallelism.
Best regards,
Kien
On 4/18/2018 11:10 PM, PedroMrChaves wrote:
Hello,
I have a job that has one async operational
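For context on the two choices: .rebalance() distributes records round-robin across downstream subtasks, while .shuffle() picks a random channel per record. A plain-Java sketch (not the Flink API) of the round-robin selection:

```java
// Sketch: round-robin channel selection, as .rebalance() does it.
// With 4 downstream subtasks, 8 records land evenly, 2 per channel.
class RoundRobin {
    private int next = 0;
    private final int channels;

    RoundRobin(int channels) { this.channels = channels; }

    int select() {
        int c = next;
        next = (next + 1) % channels;   // advance the round-robin cursor
        return c;
    }
}
```

Round-robin guarantees an even spread; random shuffling only approximates one, which is why the better choice can depend on how many records and tasks you have.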
Hi,
Our most useful tool when debugging Flink is actually the simple log
files, because debuggers just slow things down too much for us.
However, having to re-deploy the entire cluster to change the logging
level is a pain (we use YARN),
so we would really like an easier method to change the
Hi Guys,
I have created a project with Maven to try to send data from Kafka to
Flink. But when I try to build the project, I get the following error:
*[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
(default-compile) on project processing-app: Compilatio
Hi,
That tool only shows active consumer groups that use the
automatic partition assignment API.
Flink uses the manual partition assignment API, so it will not show up
there.
The best way to monitor kafka offset with Flink is using Flink's own
metrics system.
Otherwise, you can
@Alexander
Sorry about that, that was my mistake. I'll close FLINK-9204 as a
duplicate and leave my thoughts on FLINK-9155. Thanks for pointing it out!
On 19 April 2018 at 2:00:51 AM, Elias Levy (fearsome.lucid...@gmail.com) wrote:
Either proposal would work. In the latter case, at a minimum
Pardon - I missed the implicit config (which is used by withRunningKafka).
Without your manual message production, was there any indication in the broker
log that it received the message(s)?
Thanks
On Thu, Apr 19, 2018 at 6:31 AM, Chauvet, Thomas
wrote:
> Hi,
>
>
>
> withRunningKafka launch a kafka b
In your case a FlatMapFunction is better suited because it allows zero, one,
or more outputs.
It would look like this:
text.flatMap((FlatMapFunction<String, String>) (value, out) -> parse(value, out));
Or with an anonymous class:
text.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String value, Collector<String> out) {
        parse(value, out);
    }
});
Hi,
withRunningKafka launches a Kafka broker. This is one of the advantages of
this library.
I tested consuming/producing messages with the Kafka command line, and it
seems alright.
Thanks
De : Ted Yu [mailto:yuzhih...@gmail.com]
Envoyé : jeudi 19 avril 2018 15:28
À : Chauvet, Thomas
Objet : Re: Fli
Hi,
You can run Flink applications locally in your IDE and debug a Flink
program just like a regular Java/Scala application.
Best, Fabian
2018-04-19 0:53 GMT+02:00 Qian Ye :
> Hi
>
> I’m wondering if new debugging methods/tools are urgently needed for Flink
> development. I know there already exist so
Hi,
I would like to unit test a Flink job with Kafka as source (and sink). I am
trying to use the library scalatest-embedded-kafka to simulate Kafka for my
test.
For example, I would like to get data (a string stream) from Kafka, convert it
to uppercase, and put it into another topic.
N
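One option, independent of the embedded-Kafka setup: keep the uppercase logic as a plain function so it can be unit-tested directly, and use the embedded broker only to verify the wiring. A minimal Java sketch (class and method names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// The transformation under test, separated from the Kafka plumbing.
// This is the same per-record logic the Flink job would apply.
class UppercasePipeline {
    static List<String> apply(List<String> records) {
        return records.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }
}
```

The Kafka-backed test then only needs to check that records flow from the input topic, through this function, to the output topic.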
Hi Soheil,
Flink supports the type "java.lang.Void" which you can use in this case.
Regards,
Timo
Am 19.04.18 um 15:16 schrieb Soheil Pourbafrani:
Hi, I have a void function that takes a String, parses it and writes it
into Cassandra (using pure Java, not the Flink Cassandra connector). Using
Apac
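A minimal sketch of that suggestion (names are hypothetical, and a list stands in for the Cassandra write): the function is typed to return java.lang.Void, performs the write as a side effect, and returns null:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Stand-in for the Cassandra write path: records what would be written.
class CassandraStub {
    final List<String> written = new ArrayList<>();

    // Typed Function<String, Void>, mirroring a Flink MapFunction<String, Void>:
    // parse the message, write it, and emit nothing useful downstream.
    Function<String, Void> parseAndWrite() {
        return msg -> {
            written.add(msg.trim());   // "parse" is just trim() in this sketch
            return null;               // Void has exactly one value: null
        };
    }
}
```

Because Void's only value is null, the downstream stream carries no payload, which is exactly what a fire-and-forget write wants.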
Hi, I have a void function that takes a String, parses it and writes it into
Cassandra (using pure Java, not the Flink Cassandra connector). Using the
Apache Flink Kafka connector, I've got some data into a DataStream. Now I
want to apply the Parse function to each message in the DataStream, but as
the Parse function
It is evident that if I have n windows with subsequent aggregates on a keyed
stream, there are n hash partitionings, effectively n copies of the stream. Is
there a reason why that is the direction Flink has gone, rather than one
hash operator localized to the specific window/subsequent operations?
Hi Gary,
We found the underlying issue; it is the following problem.
A few of our jobs are stuck with the logs in [1]; these jobs are only able to
allocate a JM and couldn't get any TMs, even though there are ample resources
on our cluster.
We are running an ETL merge job here. In this job, we first find new deltas
a
Hi
We are using Kafka 0.11 consumers with Flink 1.4 and Confluent Kafka 4.0.0.
Checkpointing is enabled and enableCommitOnCheckpoints is set to true.
However, there are no offsets from Flink jobs visible in Kafka when checking
with the kafka-consumer-groups tool.
Any ideas?
Regards
Bernd
Hi there,
There are some bug fixes that are in the 1.4 branch that we would like to be
made available for us to use.
Is there a roadmap from the project when the next stable 1.4.x release will be
cut? Any blockers?
Hi Teena,
I'd go with approach 2. The performance difference shouldn't be significant
compared to approach 1, but it is much easier to implement, IMO.
Avoid approach 3: it will be much slower, because you need at least one call
to an external data store, and it is more difficult to implement.
Flink's checkpointin