Yes. Please check that. If it's the nested type's problem, this might be a
bug.
On Mon, Feb 25, 2019, 21:50 Karl Jin wrote:
> Do you think something funky might be happening with Map/Multiset types?
> If so, how do I deal with it? (I think I can verify by removing those
> columns and retrying.)
>
>
jaym...@amazon.com
I’m not sure that approach will work for me, as I have many sessions going at
the same time which can overlap. Also, I need to be able to have sessions time
out if they never receive an end event. Do you know offhand whether setting a
timer triggers when any timestamp passes that time, or when the w
Do you think something funky might be happening with Map/Multiset types? If
so, how do I deal with it? (I think I can verify by removing those columns
and retrying.)
On Mon, Feb 25, 2019 at 6:28 PM Karl Jin wrote:
> Thanks for checking in quickly,
>
> Below is what I got on printSchema on the two ta
This is more a Java question than Flink per se. But I believe you need to
specify the rounding mode because it is calling longValueExact. If it just
called longValue it would have worked without throwing an exception, but
you risk overflowing 64 bits and getting a totally erroneous answer.
Ar
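The difference between the two conversions is easy to reproduce with plain JDK classes, independent of Flink (a minimal sketch; the value 12.5 is an arbitrary example):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalCast {
    public static void main(String[] args) {
        BigDecimal duration = new BigDecimal("12.5");

        // longValue() silently truncates the fractional part.
        System.out.println(duration.longValue());        // 12

        // longValueExact() refuses to lose precision and throws
        // ArithmeticException: Rounding necessary.
        try {
            duration.longValueExact();
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage());          // Rounding necessary
        }

        // Rounding explicitly first makes the exact conversion safe.
        long rounded = duration.setScale(0, RoundingMode.HALF_UP).longValueExact();
        System.out.println(rounded);                     // 13
    }
}
```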
Hi Experts,
There is a Flink table which has a column typed as
java.math.BigDecimal, then in SQL I try to cast it to type long,
cast(duration as bigint)
however it throws the following exception:
java.lang.ArithmeticException: Rounding necessary
at java.math.BigDec
Hi Andrew,
> I have an “end session” event that I want to cause the window to fire
and purge.
Do you want to fire the window only by the 'end session' event? I see one
option to solve the problem. You can use a tumbling window (say 5s) and set
your timestamp to t+5s each time you receive an 'end se
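A custom trigger for the end-event case might look roughly like this (a sketch only; `SessionEvent` and its `isEndEvent()` method are hypothetical stand-ins for your own event type):

```java
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Fires and purges the window as soon as an "end session" event arrives,
// otherwise falls back to the regular event-time behavior.
public class EndSessionTrigger extends Trigger<SessionEvent, TimeWindow> {

    @Override
    public TriggerResult onElement(SessionEvent element, long timestamp,
                                   TimeWindow window, TriggerContext ctx) {
        if (element.isEndEvent()) {   // hypothetical marker on your event type
            return TriggerResult.FIRE_AND_PURGE;
        }
        ctx.registerEventTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp()
                ? TriggerResult.FIRE_AND_PURGE
                : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}
```

Attaching it via `.window(...).trigger(new EndSessionTrigger())` would then replace the window's default trigger.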
Hi Flink Mailing List,
Long story short - I want to somehow collapse watermarks at an operator
across keys, so that keys with dragging watermarks do not drag behind.
Details below:
---
I have an application in which I want to perform the following sequence of
steps: Assume my data is made up of dat
Hi all,
We're running Flink on a standalone five node cluster. The /tmp/ directory
keeps filling with directories starting with blobstore--*. These directories
are very large (approx. 1 GB) and fill up the space very quickly, and the jobs
fail with a "No space left on device" error. The files in th
Thanks for checking in quickly,
Below is what I got on printSchema on the two tables (left joining the
second one to the first one on uc_pk = i_uc_pk). rowtime in both are
extracted from the string field uc_update_ts
root
|-- uc_pk: String
|-- uc_update_ts: String
|-- rowtime: TimeIndicatorTyp
When running Flink 1.7 on EMR 5.21 using StreamingFileSink we see
java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only
supported for HDFS and for Hadoop version 2.7 or newer. EMR is showing Hadoop
version 2.8.5. Is anyone else seeing this issue?
Hello,
I’m trying to implement session windows over a set of connected streams (event
time), with some custom triggering behavior. Essentially, I allow very long
session gaps, but I have an “end session” event that I want to cause the window
to fire and purge. I’m assigning timestamps and water
Hi Karl,
It seems that some field types of your inputs were not properly extracted.
Could you share the result of `printSchema()` for your input tables?
Best,
Xingcan
> On Feb 25, 2019, at 4:35 PM, Karl Jin wrote:
>
> Hello,
>
> First time posting, so please let me know if the formatting isn
Hello,
First time posting, so please let me know if the formatting isn't correct,
etc.
I'm trying to left join two Kafka sources, running 1.7.2 locally, but
getting the below exception. Looks like some sort of query optimization
process but I'm not sure where to start investigating/debugging. I s
Hi Andrew,
when you implement your own Trigger or customize an existing Trigger you
get access to the TriggerContext in all callbacks of the Trigger interface.
The TriggerContext allows you to register custom metrics via
TriggerContext:getMetricGroup() and you can use partitioned state scoped
to
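Put concretely, a trigger can register such a metric lazily on first use (a sketch; the metric name "elements-seen" is arbitrary):

```java
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class CountingTrigger extends Trigger<Object, TimeWindow> {
    private transient Counter elementsSeen;

    @Override
    public TriggerResult onElement(Object element, long timestamp,
                                   TimeWindow window, TriggerContext ctx) {
        if (elementsSeen == null) {
            // Custom metric registered through the TriggerContext.
            elementsSeen = ctx.getMetricGroup().counter("elements-seen");
        }
        elementsSeen.inc();
        ctx.registerEventTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}
```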
Trying to build a new Docker image by replacing 1.6.3 with 1.6.4 in
the Dockerfile found here (
https://github.com/docker-flink/docker-flink ), but it seems to require a
new signing key. Is it available somewhere?
Getting
+ wget -nv -O flink.tgz.asc
https://www.apache.org/dist/flink/flink-1.6.4/fli
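All Apache Flink release signing keys are published in the project's KEYS file; importing it should make the new key available (assuming `wget` and `gpg` are installed):

```shell
# Fetch the current set of Flink release signing keys and import them
wget -O KEYS https://www.apache.org/dist/flink/KEYS
gpg --import KEYS
```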
Hi Sohi,
There seems to be no Avro implementation of the Encoder interface used in
StreamingFileSink, but maybe it could be implemented based
on AvroKeyValueWriter without too much effort.
There is also a DefaultRollingPolicy which is based on time and number of
records. It might create a temporar
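The time/size based policy mentioned above is configured through its builder (a sketch against the Flink 1.7 API; the thresholds are arbitrary examples):

```java
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

// Roll a part file after 15 minutes, after 5 minutes of inactivity,
// or once it reaches 128 MB -- whichever comes first.
DefaultRollingPolicy<String, String> policy = DefaultRollingPolicy
        .create()
        .withRolloverInterval(15 * 60 * 1000L)
        .withInactivityInterval(5 * 60 * 1000L)
        .withMaxPartSize(128 * 1024 * 1024L)
        .build();
```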
Hi all,
I am getting similar exception while upgrading from Flink 1.4 to 1.6:
```
06 Feb 2019 14:37:34,080 ERROR
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error
occurred in the cluster entrypoint.
java.lang.RuntimeException: org.apache.flink.util.FlinkException: Could not
retr
Hi Erik,
I am still not able to understand the reason behind this exception.
Is this exception causing the failure and restart of the job, or is it
occurring after the failure/restart is triggered?
Thanks
Sohi
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Andrey,
I am using AvroSinkWriter (with Bucketing Sink) with compression enabled.
Looks like StreamingFileSink does not have direct support for
AvroSinkWriter. Sequence File Format is there for StreamingFileSink, but
it looks like it rolls files on every checkpoint (OnCheckpointRollingPolicy)
wh
Hi All,
Due to the way our code is structured, we would like to use the broadcast
state at multiple points of our pipeline. So not only share it between
multiple instances of the same operator but also between multiple
operators. See the image below for a simplified example.
Flink does not seem t
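As a point of comparison, the usual single-operator pattern broadcasts the same stream to each operator that needs it; the state itself is not shared between operators (a sketch; `ruleStream`, `eventsA`, `eventsB`, and the two process functions are hypothetical names):

```java
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;

// Descriptor for the broadcast state (names/types are illustrative).
MapStateDescriptor<String, String> rulesDescriptor = new MapStateDescriptor<>(
        "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

BroadcastStream<String> rules = ruleStream.broadcast(rulesDescriptor);

// Each operator that needs the state connects to the broadcast stream itself;
// each BroadcastProcessFunction holds its own copy of the broadcast state.
eventsA.connect(rules).process(new FirstRuleApplier());
eventsB.connect(rules).process(new SecondRuleApplier());
```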
Hi Andrew,
Just to add to Rong's answer: if you use the RocksDB state backend, you can
activate state metrics forwarded from RocksDB [1].
Best,
Andrey
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html#rocksdb
On Thu, Feb 21, 2019 at 11:22 PM Rong Rong wrote:
> Hi
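The native RocksDB metrics referenced in [1] are enabled per option in flink-conf.yaml, for example (a small selection; all of them are off by default):

```yaml
# Forward selected RocksDB native metrics to Flink's metric system
state.backend.rocksdb.metrics.estimate-num-keys: true
state.backend.rocksdb.metrics.estimate-live-data-size: true
state.backend.rocksdb.metrics.num-running-compactions: true
```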
Thanks Andrey.
Yeah, will upgrade and see if the same gets reproduced.
-Sohi
Hi Sohi,
I would also recommend trying the newer StreamingFileSink which is
available in Flink 1.7.x [1].
Best,
Andrey
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/streamfile_sink.html
On Sun, Feb 24, 2019 at 4:14 AM sohimankotia wrote:
> Hi Erik,
>
> Are you
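A minimal row-format setup for the newer sink looks roughly like this (a sketch against the Flink 1.7 API; the output path and the `stream` variable are placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// Write each record as a UTF-8 line under the given base path.
StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("hdfs:///data/out"),
                      new SimpleStringEncoder<String>("UTF-8"))
        .build();

stream.addSink(sink);
```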