Hi guys,
I am trying to test a job that should run a number of tasks to read from an
RDBMS using an improved JDBC connector. The connection and the reading run
smoothly, but I cannot seem to move beyond the limit of 8 concurrent
threads. 8 is of course the number of cores of my machine
> On Wed, 23.03.2016 06:59, Chesnay Schepler wrote
> Could you be missing the call to execute()?
Yes, that was it. Can't believe I missed that!
Thank you Chesnay.
Best,
Tarandeep
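For anyone hitting the same symptom: Flink programs are lazily evaluated, so the transformation calls only build up a job plan, and nothing actually runs until execute() is invoked on the environment. A plain-Java analogy of that deferred-execution model (this is illustrative only, not the Flink API; all names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of deferred execution: steps are only recorded when declared,
// and nothing runs until execute() is called -- analogous to forgetting
// env.execute() in a Flink program.
public class LazyPlan {
    private final List<Runnable> steps = new ArrayList<>();
    private final List<String> log = new ArrayList<>();

    LazyPlan addStep(String name) {
        steps.add(() -> log.add(name));  // recorded, not executed yet
        return this;
    }

    List<String> execute() {             // without this call, log stays empty
        steps.forEach(Runnable::run);
        return log;
    }
}
```

Declaring steps without calling execute() leaves the log empty, which mirrors a Flink job that defines sources and transformations but never triggers execution.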
On 23.03.2016 01:25, Tarandeep Singh wrote:
>> Hi,
>>
>> I wrote a simple Flink job that uses Avro input format to
All:
Does Flink DataSet have a randomSplit(weights:Array[Double], seed: Long):
Array[DataSet[T]] function?
There is this pull request: https://github.com/apache/flink/pull/921
Does anyone have an update of the progress of this?
Thank you.
--
*Gna Phetsarath*System Architect // AOL Platforms
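While the linked pull request remains the authoritative work on this, the semantics such a randomSplit could have can be sketched outside of Flink in plain Java (class and method names here are hypothetical, not the proposed API): each element lands in one split with probability proportional to its weight, reproducibly for a fixed seed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of weighted random splitting with a seed.
// Each element is assigned to exactly one bucket; bucket i is chosen
// with probability weights[i] / sum(weights).
public class RandomSplitSketch {
    static <T> List<List<T>> randomSplit(List<T> items, double[] weights, long seed) {
        double total = 0;
        for (double w : weights) total += w;
        Random rnd = new Random(seed);        // fixed seed => reproducible split
        List<List<T>> splits = new ArrayList<>();
        for (int i = 0; i < weights.length; i++) splits.add(new ArrayList<>());
        for (T item : items) {
            double r = rnd.nextDouble() * total;
            double acc = 0;
            for (int i = 0; i < weights.length; i++) {
                acc += weights[i];
                if (r < acc) { splits.get(i).add(item); break; }
            }
        }
        return splits;
    }
}
```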
Great! I will, once I clear it with the legal team here.
On Wed, Mar 23, 2016 at 6:19 AM, Ufuk Celebi wrote:
> Nice! Would you like to contribute this to Flink via a pull request? Some
> resources about the contribution process can be found here:
>
> http://flink.apache.org/contribute-code.html
Never mind, I understand what is going on, Aljoscha: for each unique key the
value count is reset to 0.
On Wed, Mar 23, 2016 at 4:37 PM, Balaji Rajagopalan <
balaji.rajagopa...@olacabs.com> wrote:
> (Booking(te7uc4,compact,j...@gmail.com,Mon Feb 29 19:19:40 IST
> 2016),145873098,145873104)cu
(Booking(te7uc4,compact,j...@gmail.com,Mon Feb 29 19:19:40 IST
2016),145873098,145873104)current booking count 1
(Booking(tdr1ym,compact,er...@gmail.com,Mon Feb 29 18:41:07 IST
2016),145873098,145873104)current booking count 1
(Booking(t9zvqw,compact,yas...@gmail.com,Mon Feb 29 19:1
Hi,
what is the input for each of those outputs? Could you maybe print this:
System.out.println(in + ", current booking count " + value)
Also, what is the key that you specify for your KeyedStream?
Cheers,
Aljoscha
> On 23 Mar 2016, at 11:53, Balaji Rajagopalan
> wrote:
>
> I wrote the belo
I wrote the below code, which increments a counter for the data in the
datastream, but when I print the counter each time, the value seems to be
reinitialised to 0 and is not incrementing. Any thoughts?
class BookingCntFlatMapFunction extends
RichFlatMapFunction[(Booking,Long,Long),(Booking,
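As the follow-up in this thread concludes, state on a keyed stream is scoped per key, so each distinct key gets its own counter starting at 0 — the count is not "reinitialised", it simply belongs to a different key. A plain-Java sketch of that per-key semantics (not Flink code; the class name is made up):

```java
import java.util.HashMap;
import java.util.Map;

// Per-key counting, as keyed state behaves in a KeyedStream:
// every distinct key maintains its own independent count.
public class PerKeyCounter {
    private final Map<String, Long> counts = new HashMap<>();

    long increment(String key) {
        long next = counts.getOrDefault(key, 0L) + 1;  // new keys start from 0
        counts.put(key, next);
        return next;
    }
}
```

If every booking maps to a unique key, every increment returns 1 — which is exactly the output shown above.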
Nice! Would you like to contribute this to Flink via a pull request? Some
resources about the contribution process can be found here:
http://flink.apache.org/contribute-code.html
http://flink.apache.org/how-to-contribute.html
On Wed, Mar 23, 2016 at 12:00 AM, Fabian Hueske wrote:
> Hi Gna,
>
>
Thanks for the clarification.
case java.sql.Types.DECIMAL:
    reuse.setField(resultSet.getBigDecimal(pos + 1).doubleValue(), pos);
    break;
This causes both a NullPointerException on NULL values and a ClassCastException
when serializing the tuple.
For th
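The failing DECIMAL branch quoted above can be illustrated in isolation, in plain Java (this is not the Flink JDBCInputFormat itself; the class and method names are made up for the sketch). ResultSet.getBigDecimal returns null for a SQL NULL column, so calling doubleValue() on it throws, and narrowing to double is what later trips the cast:

```java
import java.math.BigDecimal;

// Illustration of the DECIMAL-handling bug and a guarded alternative.
public class DecimalGuard {
    // Mirrors the problematic branch: throws NullPointerException when the
    // column was SQL NULL, and narrows the value to a double (the source of
    // the later ClassCastException).
    static Object unguarded(BigDecimal raw) {
        return raw.doubleValue();
    }

    // Null-tolerant variant that also keeps the BigDecimal unchanged.
    static Object guarded(BigDecimal raw) {
        return raw == null ? null : raw;
    }
}
```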
On 23.03.2016 10:38, Chesnay Schepler wrote:
On 23.03.2016 10:04, Stefano Bortoli wrote:
I had a look at the JDBC input format, and it does indeed interpret
BIGDECIMAL and NUMERIC values as double.
This sounds more like a bug actually. Feel free to open a JIRA for this.
Actually, this was done
On 23.03.2016 10:04, Stefano Bortoli wrote:
I had a look at the JDBC input format, and it does indeed interpret
BIGDECIMAL and NUMERIC values as double.
This sounds more like a bug actually. Feel free to open a JIRA for this.
The status of the JDBCInputFormat is not adequate for real world use
Hi,
the output at 19:44:44.635 is indeed strange. Is this reproducible?
As for the removal of windows: that is a pitfall a lot of users have fallen
into. The timeWindowAll() call just sets up a window assigner, so in your case
the equivalent call would be:
.flatMap { _.toLowerCase.split("\
I had a look at the JDBC input format, and it does indeed interpret
BIGDECIMAL and NUMERIC values as double. The status of the JDBCInputFormat
is not adequate for real-world use cases; for example, it does not deal with
NULL values.
However, with little effort we fixed a few things and now we are getti
Hi,
have you tried clearing your m2 repository? It would also be helpful to see
your dependencies (pom.xml).
Cheers,
Till
On Tue, Mar 22, 2016 at 10:41 PM, Sharma, Samiksha wrote:
> Hi,
>
> I am converting a storm topology to Flink-storm topology using the
> flink-storm dependency. When I run
Hi Mengqi,
if what you are trying to do is output the solution set of every iteration
before the iteration has finished, then that is not possible, i.e. you
cannot output the solution set to a sink or another operator during the
iteration.
However, you can add elements to the solution set and g