Hi Billy,
Without looking in detail at how the batch API works, I think the filter
approach might not be the most efficient way in general to implement
conditional branching in a topology. So this may not answer your question
in terms of performance improvement.
If you are willing to use the streaming API, you might consider
When I start my Flink application with a parallelism (-p) value of 24, 29
slots are used for the application. Is that expected behavior in some
scenarios?
My application reads an event stream from Kafka, does some filtering,
and does a keyBy on the stream. Then it processes the same stream
I've implemented side outputs right now using an enum approach, as recommended
by others. Basically I have a mapper which wants to generate 4 outputs (DATA,
INSERT, UPDATES, DELETE).
It emits a Tuple2 right now, and I use 4 downstream filters to
write each 'stream' to a different Parquet file.
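For reference, a minimal sketch of that enum-plus-filters pattern in the
Java API (the event type and mapper name are made up for illustration;
the shape is the same on DataSet or DataStream):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

// Hypothetical tag for the four outputs mentioned above.
enum OutputKind { DATA, INSERT, UPDATES, DELETE }

// Given a tagged stream produced by the mapper, e.g.
// DataStream<Tuple2<OutputKind, MyEvent>> tagged = events.map(new TaggingMapper());
// each kind is selected by its own filter and routed to its own Parquet sink:
DataStream<Tuple2<OutputKind, MyEvent>> inserts =
        tagged.filter(t -> t.f0 == OutputKind.INSERT);
DataStream<Tuple2<OutputKind, MyEvent>> deletes =
        tagged.filter(t -> t.f0 == OutputKind.DELETE);

Note that each filter evaluates every record, so the tagged stream is
scanned once per output; the streaming API's split/select can instead
route each record in a single pass.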
Dear flinkers,
I'm consuming from a Kafka broker on a server that has SSL authentication
enabled. How do I configure my consumer to comply with it?
Many thanks
Alex
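For what it's worth, a minimal sketch of how SSL is commonly configured for
the Flink Kafka consumer (assuming Kafka 0.9+ and the matching connector;
the broker address, paths, and passwords below are placeholders):

import java.util.Properties;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9093"); // the broker's SSL port
props.setProperty("group.id", "my-group");
// Standard Kafka client SSL settings, forwarded to the underlying consumer:
props.setProperty("security.protocol", "SSL");
props.setProperty("ssl.truststore.location", "/path/to/truststore.jks");
props.setProperty("ssl.truststore.password", "changeit");
// Only needed if the broker requires client (mutual) authentication:
props.setProperty("ssl.keystore.location", "/path/to/keystore.jks");
props.setProperty("ssl.keystore.password", "changeit");

FlinkKafkaConsumer09<String> consumer =
        new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props);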
Update: I've now used 1.1.3 versions as in the example in the docs and it
works!
Looks like there is an incompatibility with the latest Logback.
Best regards,
Dmitry
Hi Robert,
After reading that link I've added the missing log4j-over-slf4j dependency.
Now Flink is able to start, but I get another exception when uploading a
job:
NoClassDefFoundError: org/slf4j/event/LoggingEvent
...
at
org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureClean
Dear Apache Enthusiast,
This is your FINAL reminder that the Call for Papers (CFP) for ApacheCon
Miami is closing this weekend - February 11th. This is your final
opportunity to submit a talk for consideration at this event.
This year, we are running several mini conferences in conjunction with
t
Ok. Right now I am simply taking a POJO to get the data types and
schema, but I need a generic approach to get this information.
Thanks
I also thought about it and my conclusion was to use a generic SQL parser
(e.g. Calcite?) to extract the column names from the input query (because
in the query you can rename/add fields). I'd like to hear opinions
about this. Unfortunately I don't have the time to implement it right now
:(
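For what it's worth, a rough sketch of that idea with Calcite's SQL parser
(only a sketch: it assumes a plain select list and ignores SELECT *,
subqueries, etc.):

import java.util.ArrayList;
import java.util.List;
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlSelect;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;

static List<String> selectColumnNames(String sql) throws SqlParseException {
    SqlNode parsed = SqlParser.create(sql).parseQuery();
    List<String> names = new ArrayList<>();
    for (SqlNode item : ((SqlSelect) parsed).getSelectList().getList()) {
        // Renamed fields show up as AS calls; the alias is the second operand.
        names.add(item.getKind() == SqlKind.AS
                ? ((SqlCall) item).operand(1).toString()
                : item.toString());
    }
    return names;
}

// e.g. selectColumnNames("SELECT id, name AS user_name FROM users")
// returns [id, user_name]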
Hi,
With this approach I will be able to get the data types but not the column
names, because TypeInformation typeInformation = dataStream.getType()
returns the types but not the column names.
Is there any other way to get the column names from a Row?
Thanks
Punit
On 02/08/2017 10:17 AM, Chesnay
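One thing that may help here: if the stream's TypeInformation is a
CompositeType (RowTypeInfo is one), it does carry field names, though for
Rows these are often just the generated defaults f0, f1, ... unless the
RowTypeInfo was built with explicit names. A small sketch:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeutils.CompositeType;

TypeInformation<?> typeInformation = dataStream.getType();
if (typeInformation instanceof CompositeType) {
    // May be real column names or just the generated f0, f1, ...
    String[] fieldNames = ((CompositeType<?>) typeInformation).getFieldNames();
}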
Hi Ekram,
if you are referring to
https://github.com/apache/flink/blob/d7b59d761601baba6765bb4fc407bcd9fd6a9387/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/math/SparseMatrix.scala
you could first convert it to a Breeze matrix:
https://github.com/apache/flink/blob/1753b1d25b4d943c1f
Hello,
in the JDBC case I would suggest that you extract the schema from the
first Row that your sink receives, create the table, and then
start writing data.
However, keep in mind that Rows can contain null fields, so you may not
be able to extract the entire schema if the first
row has a null field.
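A hedged sketch of that lazy approach (the helper and its type mapping are
made up for illustration; note how a null field forces a fallback type,
which is exactly the caveat above):

import org.apache.flink.types.Row;

// Derive a CREATE TABLE statement from the first Row a sink receives.
// Rows carry no column names, so names are generated as col0, col1, ...
static String createTableSql(String table, Row first) {
    StringBuilder sql = new StringBuilder("CREATE TABLE " + table + " (");
    for (int i = 0; i < first.getArity(); i++) {
        Object value = first.getField(i);
        String sqlType =
                value == null ? "VARCHAR"              // type unknown: fall back
                : value instanceof Integer ? "INTEGER"
                : value instanceof Long ? "BIGINT"
                : value instanceof Double ? "DOUBLE"
                : "VARCHAR";
        if (i > 0) sql.append(", ");
        sql.append("col").append(i).append(' ').append(sqlType);
    }
    return sql.append(')').toString();
}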
Hi Chesnay,
Currently that is what I have done: reading the schema from the database in
order to create a new table in the JDBC database and writing the rows coming
from the JDBCInputFormat.
Overall I am trying to implement a solution which reads the streaming
data from one source, which could either be
Thanks Robert and Ufuk for the update.
2017-02-07 18:43 GMT+01:00 Robert Metzger :
> I've filed a JIRA for the issue:
> https://issues.apache.org/jira/browse/FLINK-5736
>
> On Tue, Feb 7, 2017 at 5:00 PM, Robert Metzger
> wrote:
>
>> Yes, I'll try to fix it asap. Sorry for the inconvenience.
>>
Hi Anton,
The list hasn't been updated since September 2015. I'll put a big warning
note on top of the page :)
On Wed, Feb 8, 2017 at 8:25 AM, Anton Solovev
wrote:
> Hi Robert,
>
>
>
> I thought this list
> https://cwiki.apache.org/confluence/display/FLINK/List+of+contributors
> would be updated
Hello,
I don't understand why you explicitly need the schema since the batch
JDBCInput-/Outputformats don't require it.
That's kind of the nice thing about Rows.
Would be cool if you could tell us what you're planning to do with the
schema :)
In any case, to get the schema within the plan t