Schepler
wrote:
> Do you see anything in the logs? In another thread a user reported that
> the datadog reporter could stop working when faced with a large number of
> metrics since datadog was rejecting the report due to being too large.
>
> On 15/03/2020 12:22, Yitzchak Lieberman wrote:
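> For comparison, a standard flink-conf.yaml setup for the Datadog reporter
> looks roughly like this (api key and tags below are placeholders):
>
> metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
> metrics.reporter.dghttp.apikey: <your-api-key>
> metrics.reporter.dghttp.tags: env:prod,team:data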
Anyone?
On Wed, Mar 11, 2020 at 11:23 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> Has anyone encountered problems sending metrics with the Datadog HTTP
> reporter?
> My setup is flink version 1.8.2 deployed on k8s with 1 job manager and 10
>
Hi.
Has anyone encountered problems sending metrics with the Datadog HTTP
reporter?
My setup is Flink 1.8.2 deployed on k8s with 1 job manager and 10
task managers.
On every version deploy I see metrics on my dashboard, but after a few
minutes they stop being sent from all task managers, whi...
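One thing I can try is raising the log verbosity for the reporter classes in
the task managers' log4j.properties, to see whether the reporter logs any
failures around the time the metrics stop (assuming the stock log4j setup
that ships with Flink):

# more verbose logging for the Datadog reporter classes
log4j.logger.org.apache.flink.metrics.datadog=DEBUG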
What exactly would you prefer? Without the stream name and shard id you'd
> end up with name clashes all over the place.
>
> Why can you not aggregate them? Surely Datadog supports some way to define
> a wildcard when defining the tags to aggregate.
>
> On 03/10/2019 09:09, Yitzchak Lieberman wrote:
Hi.
I would like the ability to control the metric group of the Flink Kinesis
consumer:
As written below, it creates a metric identifier for each stream name and
shard id (in our case more than 1,000 metric identifiers), so they cannot
be aggregated in a Datadog graph:
private static Shar
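To show what I mean, the per-shard identifiers come from nested metric
groups, roughly like the sketch below (simplified, not the actual connector
code; streamName, shardId and millisBehindLatest are illustrative):

// Sketch: every (stream, shard) pair gets its own metric group, so the
// number of distinct metric identifiers grows with the shard count.
MetricGroup shardGroup = getRuntimeContext().getMetricGroup()
        .addGroup("stream", streamName)
        .addGroup("shardId", shardId);
Gauge<Long> lag = () -> millisBehindLatest;
shardGroup.gauge("millisBehindLatest", lag);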
What is the topic replication factor? How many Kafka brokers do you have?
I was facing the same exception when one of my brokers was down and the
topic had no replicas (replication_factor=1).
On Sun, Aug 25, 2019 at 2:55 PM Eyal Pe'er wrote:
> BTW, the exception that I see in the log is: ERROR
>
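A quick way to check the replication factor from code, using the plain
Kafka AdminClient (bootstrap address and topic name are placeholders, and
printReplicas is a hypothetical helper):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

// Hypothetical helper: prints the replica count of every partition.
static void printReplicas(String bootstrap, String topic) throws Exception {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", bootstrap);
    try (AdminClient admin = AdminClient.create(props)) {
        TopicDescription desc =
                admin.describeTopics(Collections.singleton(topic)).all().get().get(topic);
        // a replica list of size 1 means replication_factor=1 (no redundancy)
        desc.partitions().forEach(p ->
                System.out.println("partition " + p.partition()
                        + " replicas=" + p.replicas().size()));
    }
}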
Hi.
Turned out that the cause was non-replicated (replication factor = 1)
topics in Kafka.
On Wed, Jul 24, 2019 at 4:20 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> Do we have an idea for this exception?
>
> Thanks,
> Yitzchak.
>
> On Tue, J
might be interesting to know.
>
> Maybe Gordon (in CC) has an idea of what's going wrong here.
>
> Best, Fabian
>
> On Tue, 23 Jul 2019 at 08:50, Yitzchak Lieberman <
> yitzch...@sentinelone.com> wrote:
>
>> Hi.
>>
>> Another question - what
Hi.
Another question - what will happen during a triggered checkpoint if one of
the Kafka brokers is unavailable?
I would appreciate your insights.
Thanks.
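From reading CheckpointConfig, it looks like the failure behavior can at
least be relaxed; a sketch of what I mean (assuming Flink 1.8 defaults
otherwise):

StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5_000);
// If a checkpoint cannot complete (e.g. the Kafka producer cannot flush
// because a broker is unreachable), drop the checkpoint instead of
// failing the job.
env.getCheckpointConfig().setFailOnCheckpointingErrors(false);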
On Mon, Jul 22, 2019 at 12:42 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> I'm running a Flink
Hi.
I'm running a Flink application (version 1.8.0) that
uses FlinkKafkaConsumer to fetch topic data and perform transformations on
the data, with the state backend configured as below:
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5_000, CheckpointingMode.EXACTLY_ONCE);
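Continuing the snippet above, the rest of the wiring is along these lines
(topic, servers and checkpoint path are placeholders, not our production
values, and EXACTLY_ONCE is assumed as the checkpointing mode):

Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "broker1:9092");
kafkaProps.setProperty("group.id", "my-group");
// checkpoints/state go to S3 (placeholder path)
env.setStateBackend(new FsStateBackend("s3://my-bucket/checkpoints"));
DataStream<String> stream = env.addSource(
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), kafkaProps));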
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 6 ms.
On Thu, Jul 18, 2019 at 3:49 PM miki haiat wrote:
> Can you share your logs?
>
>
> On Thu, Jul 18, 2019 at 3:22 PM Yitzchak Lieberman <
> yitzch...@sentinelone.com> wrote:
>
>
Hi.
I have a Flink application that produces to Kafka with 3 brokers.
When I add 2 brokers that are not up yet, the checkpoint (a key in
S3) fails due to a timeout error.
Do you know what can cause that?
Thanks,
Yitzchak.
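For reference, the producer is configured roughly like this (addresses and
topic are placeholders). As far as I know, the "Failed to update metadata"
timeout is governed by the producer's max.block.ms setting (default 60000):

Properties props = new Properties();
// only brokers that are actually reachable; the producer fetches its
// metadata from these
props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
// how long send()/flush() may block waiting for metadata
props.setProperty("max.block.ms", "120000");
FlinkKafkaProducer<String> producer =
        new FlinkKafkaProducer<>("my-topic", new SimpleStringSchema(), props);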
Regarding option 2 for Parquet:
implementing a bucket assigner won't set the file name, since getBucketId()
defines the directory for the files when partitioning the data,
for example:
/day=20190101/part-1-1
There is an open issue for that:
https://issues.apache.org/jira/browse/FLINK-12573
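To clarify, a bucket assigner along these lines only controls the directory
part; the part-file name itself comes from the sink. A sketch with a
hypothetical MyEvent type (not our actual assigner):

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import org.apache.flink.core.io.SimpleVersionedSerializer;
import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;

// Sketch: buckets records into day=YYYYMMDD directories; the "part-1-1"
// file name inside each bucket is chosen by the StreamingFileSink itself.
public class DayBucketAssigner implements BucketAssigner<MyEvent, String> {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyyMMdd").withZone(ZoneOffset.UTC);

    @Override
    public String getBucketId(MyEvent element, Context context) {
        // e.g. "day=20190101" -> s3a://bucket/base/day=20190101/part-x-y
        return "day=" + FMT.format(Instant.ofEpochMilli(element.getTimestamp()));
    }

    @Override
    public SimpleVersionedSerializer<String> getSerializer() {
        return SimpleVersionedStringSerializer.INSTANCE;
    }
}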
On Tue,
Hi.
I'm using the StreamingFileSink for writing partitioned data to s3.
The code is below:
StreamingFileSink sink =
StreamingFileSink.forBulkFormat(new Path("s3a://test-bucket/test"),
ParquetAvroFactory.getParquetWriter(schema, "GZIP"))
.withBucketAssigner(new PartitionBucketAssigner())
.build();
> One is that you need to have the explicit
> hdfs://xxx protocol.
>
> Another is that you’re in classpath hell, and your job jar contains an
> older version of Hadoop jars.
>
> — Ken
>
>
> On Jun 11, 2019, at 12:16 AM, Yitzchak Lieberman <
> yitzch...@sentinelone.com> wrote:
>
> Hi
Hi.
I'm a bit confused:
When launching my Flink streaming application on EMR release 5.24 (which
has Flink version 1.8) that writes Kafka messages to S3 parquet files, I'm
getting the exception below, but when I install Flink 1.8 on EMR
myself as a custom setup, it works.
What could be the difference beh...