Hi All,
I read a couple of articles about the Kappa and Lambda Architecture.
http://www.confluent.io/blog/real-time-stream-processing-the-next-step-for-apache-flink/
I'm convinced that Flink will simplify this with streaming.
However, I also stumbled upon this blog post that has valid arguments to h
Hi!
Rather than taking a 0.10-SNAPSHOT, you could also take a 0.10 release
candidate.
The latest one is, for example, at
https://repository.apache.org/content/repositories/orgapacheflink-1053/
Greetings,
Stephan
On Mon, Nov 9, 2015 at 5:45 PM, Maximilian Michels wrote:
> Hi Brian,
>
> We are curr
Hi Brian,
We are currently in the process of releasing 0.10.0. Thus, the master
version has already been updated to 1.0, which is the next scheduled
release.
If you want to use the latest SNAPSHOT version, you may build it from
source or use the SNAPSHOT Maven artifacts. For more information,
plea
On Mon, Nov 9, 2015 at 9:59 AM, Brian Chhun
wrote:
> Is there a way to download a 0.10-SNAPSHOT package like what's available
> for 0.9.1? The downloads page on http://flink.apache.org/ seems to only
> have up to 0.9.1, despite having documentation for "0.10 SNAPSHOT" (the
> documentation link als
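For what it's worth, a minimal sketch of what using the SNAPSHOT Maven
artifacts could look like in a pom.xml, assuming the standard Apache snapshot
repository and the flink-java artifact (adjust to the modules you actually
use):

<repositories>
  <repository>
    <id>apache.snapshots</id>
    <name>Apache Snapshot Repository</name>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
</dependencies>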
Hi everyone,
I am considering using Flink in a project. The setting would be a YARN cluster
where data is first read in from HDFS, then processed and finally written into
an Oracle database using an upsert command. If I understand the documentation
correctly, the DataSet API would be the natura
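One way to sketch the final upsert step is a custom OutputFormat that issues
an Oracle MERGE (Oracle's upsert) per record over plain JDBC. The table name,
columns, connection settings, and the Tuple2<String, Integer> record type
below are all assumptions for illustration:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.flink.api.common.io.OutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

// Each parallel instance opens its own JDBC connection and issues
// one MERGE per record.
public class OracleUpsertOutputFormat implements OutputFormat<Tuple2<String, Integer>> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void configure(Configuration parameters) {
        // nothing to configure in this sketch
    }

    @Override
    public void open(int taskNumber, int numTasks) throws IOException {
        try {
            connection = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password");
            statement = connection.prepareStatement(
                    "MERGE INTO KEY_VALUES t "
                    + "USING (SELECT ? K, ? V FROM dual) s ON (t.K = s.K) "
                    + "WHEN MATCHED THEN UPDATE SET t.V = s.V "
                    + "WHEN NOT MATCHED THEN INSERT (K, V) VALUES (s.K, s.V)");
        } catch (SQLException e) {
            throw new IOException("Could not open JDBC connection", e);
        }
    }

    @Override
    public void writeRecord(Tuple2<String, Integer> record) throws IOException {
        try {
            statement.setString(1, record.f0);
            statement.setInt(2, record.f1);
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new IOException("Upsert failed for record " + record, e);
        }
    }

    @Override
    public void close() throws IOException {
        try {
            if (statement != null) statement.close();
            if (connection != null) connection.close();
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}

The job would then end with result.output(new OracleUpsertOutputFormat());.
Batching the statements, or reusing the JDBC output format that ships with
flink-jdbc, would be the more polished route.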
Great to hear you sorted things out. Looking forward to the pull request!
On Mon, Nov 9, 2015 at 4:50 PM, Stephan Ewen wrote:
> Super nice to hear :-)
>
>
> On Mon, Nov 9, 2015 at 4:48 PM, Niels Basjes wrote:
>>
>> Apparently I just had to wait a bit longer for the first run.
>> Now I'm able to
Super nice to hear :-)
On Mon, Nov 9, 2015 at 4:48 PM, Niels Basjes wrote:
> Apparently I just had to wait a bit longer for the first run.
> Now I'm able to package the project in about 7 minutes.
>
> Current status: I am now able to access HBase from within Flink on a
> Kerberos secured cluste
Apparently I just had to wait a bit longer for the first run.
Now I'm able to package the project in about 7 minutes.
Current status: I am now able to access HBase from within Flink on a
Kerberos secured cluster.
Cleaning up the patch so I can submit it in a few days.
On Sat, Nov 7, 2015 at 10:01
The distributed "start-cluster.sh" script only works if the code is
accessible under the same path on all machines, which must be the same path
as on the machine where you invoke the script.
Otherwise the paths for remote shell commands will be wrong, and the
classpaths will be wrong as a result.
But the default WordCount example, in which Flink accesses Hadoop, runs?
Or is that something different?
On 09.11.2015 11:54, "Maximilian Michels" wrote:
> Hi Thomas,
>
> It appears Flink couldn't pick up the Hadoop configuration. Did you
> set the environment variables HADOOP_CONF_DIR or HADOOP_HOME?
Hello,
I am configuring Flink to run on a cluster with NFS.
I have the Flink 0.9.1 distribution at a path on NFS; I added that path
in ~/.bashrc as FLINK_HOME and also added the $FLINK_HOME/lib folder to
$PATH.
I have the slaves file and the yaml file configured correctly with the node
Why don't you use a composite key for the Flink join
(first.join(second).where(0, 1).equalTo(2, 3).with(...))?
This would be more efficient and you can omit the check in the join
function.
Best, Fabian
2015-11-08 19:13 GMT+01:00 Philip Lee :
> I want to join two tables with two columns like
>
> //
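Since the original snippet is cut off, here is a self-contained sketch of
what such a composite-key join could look like, with assumed (id, date,
value) tuples on both inputs and assumed key positions:

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;

public class CompositeKeyJoin {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // (id, date, value) on both sides
        DataSet<Tuple3<Integer, String, Double>> first = env.fromElements(
                new Tuple3<>(1, "2015-11-08", 10.0));
        DataSet<Tuple3<Integer, String, Double>> second = env.fromElements(
                new Tuple3<>(1, "2015-11-08", 20.0));

        // Join on the composite key (id, date); no manual equality check
        // is needed inside the join function.
        first.join(second)
             .where(0, 1)
             .equalTo(0, 1)
             .with(new JoinFunction<Tuple3<Integer, String, Double>,
                                    Tuple3<Integer, String, Double>,
                                    Tuple2<Integer, Double>>() {
                 @Override
                 public Tuple2<Integer, Double> join(
                         Tuple3<Integer, String, Double> left,
                         Tuple3<Integer, String, Double> right) {
                     return new Tuple2<>(left.f0, left.f2 + right.f2);
                 }
             })
             .print();
    }
}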
Hello,
thanks for the answer, but windows produce periodic results. I used your
example, but the data source is changed to a TCP stream:
DataStream<String> text = env.socketTextStream("localhost", 2015,
'\n');
DataStream<Tuple2<String, Integer>> wordCounts =
text
.flatMap(new Line
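For reference, a runnable sketch of the full pipeline in the 0.10-era
streaming API; the socket source and the 5-second window are assumptions,
and each window firing is exactly one of those periodic results:

import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> text = env.socketTextStream("localhost", 2015, '\n');

        DataStream<Tuple2<String, Integer>> wordCounts = text
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line,
                                        Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.toLowerCase().split("\\W+")) {
                            if (!word.isEmpty()) {
                                out.collect(new Tuple2<>(word, 1));
                            }
                        }
                    }
                })
                .keyBy(0)                                  // one count per word
                .timeWindow(Time.of(5, TimeUnit.SECONDS))  // fires every 5 seconds
                .sum(1);

        wordCounts.print();
        env.execute("Socket window word count");
    }
}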
Hi,
yes please, open an Issue for that. I think the method would have to be added
to TableEnvironment.
Aljoscha
> On 09 Nov 2015, at 12:19, Johann Kovacs wrote:
>
> Hi,
> thanks for having a look at this, Aljoscha.
>
> Not being able to read a DataSet[Row] from CSV is definitely the most maj
Hi,
thanks for having a look at this, Aljoscha.
Not being able to read a DataSet[Row] from CSV is definitely the most
major issue for me right now.
Everything else I could work around with Scala magic. I can create an issue
for this if you'd like.
Regarding the other points:
1. Oh absolutely, t
Hi Thomas,
It appears Flink couldn't pick up the Hadoop configuration. Did you
set the environment variables HADOOP_CONF_DIR or HADOOP_HOME?
Best,
Max
On Sun, Nov 8, 2015 at 7:52 PM, Thomas Götzinger wrote:
> Sorry for the confusion,
>
> the Flink cluster throws the following stack trace:
>
> org.apac
The reason is that the non-rich function interfaces are SAM (single
abstract method) interfaces.
In Java 8, SAM interfaces can be specified as concise lambda functions.
Cheers, Fabian
2015-11-09 10:45 GMT+01:00 Flavio Pompermaier :
> Hi flinkers,
> I have a simple question for you that I want
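To illustrate the difference (a minimal sketch, not from the original
thread): MapFunction has a single abstract method and can be written as a
lambda, while RichMapFunction is an abstract class with lifecycle methods
and needs a subclass:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class RichVsPlain {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Integer> numbers = env.fromElements(1, 2, 3);

        // MapFunction is a SAM interface, so a concise Java 8 lambda works.
        numbers.map(x -> x * 2).print();

        // RichMapFunction additionally has open()/close()/getRuntimeContext(),
        // so it cannot be a lambda; an anonymous subclass is needed.
        numbers.map(new RichMapFunction<Integer, Integer>() {
            @Override
            public void open(Configuration parameters) {
                // set up expensive resources, access the RuntimeContext, ...
            }

            @Override
            public Integer map(Integer value) {
                return value * 2;
            }
        }).print();
    }
}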
Hi flinkers,
I have a simple question for you that I wanted to ask since the beginning
of Flink: is there really a need to separate Rich and normal operators?
Why not keep just the Rich version?
Best,
Flavio
Hi!
If you want to work on subsets of streams, the answer is usually to use
windows, "stream.keyBy(...).timeWindow(Time.of(1, MINUTE))".
Do the transformations that you want to make fit into a window
function?
There are thoughts to introduce something like global time windows across
the en
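Spelled out, that pattern could look like the following sketch, with an
assumed stream of (sensorId, value) pairs and a per-key, per-window reduce
standing in for the intended transformation:

import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KeyedWindowSubsets {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Double>> events = env.fromElements(
                new Tuple2<>("sensor-1", 0.5),
                new Tuple2<>("sensor-1", 0.7),
                new Tuple2<>("sensor-2", 0.9));

        // Each key gets its own sequence of 1-minute windows; the reduce
        // sees only the subset of the stream for that key and window.
        events.keyBy(0)
              .timeWindow(Time.of(1, TimeUnit.MINUTES))
              .reduce(new ReduceFunction<Tuple2<String, Double>>() {
                  @Override
                  public Tuple2<String, Double> reduce(Tuple2<String, Double> a,
                                                       Tuple2<String, Double> b) {
                      return new Tuple2<>(a.f0, a.f1 + b.f1);
                  }
              })
              .print();

        env.execute("Keyed time window example");
    }
}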