Hi Philip,
As far as I know, Ganglia [1] is widely used for monitoring the Hadoop
ecosystem. If your Hadoop distribution was installed by Apache Ambari [2],
you can fetch metrics easily from the pre-installed Ganglia.
[1]: http://ganglia.sourceforge.net
[2]: https://ambari.apache.org
> On Dec 8, 2015, at
Hi Radu,
you are right. The open() method is called for each parallel instance of
a rich function. Thus, if all instances use the same code, you might
read the same data multiple times.
The easiest way to distinguish different instances within open() is to
use the RuntimeContext. It offers two m
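The per-instance pattern described above can be sketched without any Flink dependency. This is a minimal sketch under stated assumptions: in a real rich function the index and parallelism would come from RuntimeContext.getIndexOfThisSubtask() and getNumberOfParallelSubtasks() inside open(); here they are plain parameters, and the class and helper names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class SubtaskSplit {
    // Hypothetical helper: given this instance's index and the total number
    // of parallel instances, return the slice of records this instance should
    // read, so that the parallel instances read disjoint parts of the input
    // instead of each reading the whole file.
    static List<String> sliceFor(List<String> records, int subtaskIndex, int parallelism) {
        List<String> slice = new ArrayList<>();
        // Round-robin assignment: instance i takes records i, i+p, i+2p, ...
        for (int i = subtaskIndex; i < records.size(); i += parallelism) {
            slice.add(records.get(i));
        }
        return slice;
    }

    public static void main(String[] args) {
        List<String> records = List.of("a", "b", "c", "d", "e");
        System.out.println(sliceFor(records, 0, 2)); // [a, c, e]
        System.out.println(sliceFor(records, 1, 2)); // [b, d]
    }
}
```

With parallelism 2, the two instances together cover every record exactly once, which avoids the duplicate-read problem mentioned above.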
Hi,
Taking the example you mentioned of using RichFlatMapFunction and in the open()
reading a file.
Would this open function be executed on each node where the RichFlatMapFunction
gets executed? (I have done some tests and I get the feeling it does,
but I wanted to double-check.)
If so
Hi Matthias, sorry for the confusion. I just used some simple code in the
Count Bolt to write the bolt output into a file and was not using
BoltFileSink.
OutputStream o;
try {
    o = new FileOutputStream("/tmp/wordcount.txt", true);
    o.write((word + " " + count.toString() + "\n").getBytes());
    o.close();
} catch (IOException e) {
    e.printStackTrace();
}
Hello,
I've implemented a (streaming) flow using the Java API and Java8 Lambdas
for various map functions. When I try to run the flow, job submission fails
because of an unserializable type. This is not a type of data used within
the flow, but rather a small collection of objects captured in the c
Hello, a question about metrics.
I want to evaluate some queries on Spark, Flink, and Hive for a comparison.
I am using 'vmstat' to check metrics such as the amount of memory used,
swap, IO, and CPU. Is my way of evaluating right? Because they use the
JVM's resources for memory and CPU.
Is there any linux app
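One caveat with vmstat for this comparison: it reports machine-wide memory and CPU, while Spark, Flink, and Hive each run inside a JVM, so heap actually used can differ a lot from what the OS shows. A per-process view via the JVM's own JMX beans may be more representative. A minimal sketch (class name is hypothetical; only standard java.lang.management APIs are used):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    // Heap bytes currently used by this JVM, as reported by JMX.
    static long usedHeapBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        // Prints the current heap usage of this process, independent of
        // whatever else is running on the machine.
        System.out.println("used heap: " + usedHeapBytes() + " bytes");
    }
}
```

The same beans are also exposed remotely over JMX, so the driver/task-manager processes of each framework could be sampled while a query runs.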
Forgot to mention. I've checked it both on 0.10 and current master.
2015-12-07 20:32 GMT+01:00 Dawid Wysakowicz :
> Hi,
>
> I have recently experimented a bit with windowing and the event-time
> mechanism in Flink, and either I do not understand how it should work or
> there is some kind of a bug.
>
>
Hi,
I have recently experimented a bit with windowing and the event-time
mechanism in Flink, and either I do not understand how it should work or
there is some kind of a bug.
I have prepared two Source Functions. One that emits watermark itself and
one that does not, but I have prepared a TimestampExt
Sorry, correcting myself:
The ClosureCleaner uses Kryo's bundled ASM 4 without any reason - simply
adjusting the imports to use the common ASM (which is 5.0) should do it ;-)
On Mon, Dec 7, 2015 at 8:18 PM, Stephan Ewen wrote:
> Flink's own asm is 5.0, but the Kryo version used in Flink bundles
Flink's own asm is 5.0, but the Kryo version used in Flink bundles
reflectasm with a dedicated asm version 4 (no lambdas supported).
Might be as simple as bumping the kryo version...
On Mon, Dec 7, 2015 at 7:59 PM, Cory Monty
wrote:
> Thanks, Max.
>
> Here is the stack trace I receive:
>
> ja
Thanks, Max.
Here is the stack trace I receive:
java.lang.IllegalArgumentException:
    at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.&lt;init&gt;(Unknown Source)
    at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.&lt;init&gt;(Unknown Source)
    at com.esotericsoftware.reflectasm
For completeness, could you provide a stack trace of the error message?
On Mon, Dec 7, 2015 at 6:56 PM, Maximilian Michels wrote:
> Hi Cory,
>
> Thanks for reporting the issue. Scala should run independently of the
> Java version. We are already using ASM version 5.0.4. However, some
> code uses
Hi Cory,
Thanks for reporting the issue. Scala should run independently of the
Java version. We are already using ASM version 5.0.4. However, some
code uses the ASM4 op codes, which don't seem to work with Java 8.
This needs to be fixed. I'm filing a JIRA.
Cheers,
Max
On Mon, Dec 7, 2015 at 4:
Hi,
Can you try to describe what is planned for the future releases and perhaps
link the Jira issues/bugs to it?
Some very important features have a Major priority, like:
[1] Add a SQL API (on top of Table API)
[2] Add KMeans clustering algorithm to ML Library (kmeans ++ & ||)
[3] Create eva
Is it possible to use Scala 2.11 and Java 8?
I'm able to get our project to compile correctly; however, there are runtime
errors with the Reflectasm library (I'm guessing due to Kryo). I looked
into the error and it seems Spark had the same issue (
https://issues.apache.org/jira/browse/SPARK-6152,
Thanks!
I should've mentioned that I've seen the FAQ, but I didn't notice IntelliJ
deleting the import immediately.
For anyone encountering a similar behavior:
http://stackoverflow.com/questions/11154912/how-to-prevent-intellij-idea-from-deleting-unused-packages
Note that you have to uncheck "Optimiz
Hey Alex,
Try adding the following import:
import org.apache.flink.streaming.api.scala._
This adds all the implicit utilities that Flink needs to determine type info.
Best,
Marton
On Mon, Dec 7, 2015 at 10:24 AM, lofifnc wrote:
> Hi,
>
> I'm getting an error when using .fromElements() of the
Have a look here:
http://flink.apache.org/faq.html#in-scala-api-i-get-an-error-about-implicit-values-and-evidence-parameters
On Mon, Dec 7, 2015 at 10:24 AM, lofifnc wrote:
> Hi,
>
> I'm getting an error when using .fromElements() of the
> StreamExecutionEnvironment or ExecutionEnvironment of t
Hi,
I'm getting an error when using .fromElements() of the
StreamExecutionEnvironment or ExecutionEnvironment of the Scala API:
Error:(53, 54) could not find implicit value for evidence parameter of type
org.apache.flink.api.common.typeinfo.TypeInformation[Int]
val source : DataSet[Int] = en
> Wouldn't it be better to have both connectors for ES? One for 1.x and
> another for 2.x?
IMHO that's the way to go.
Thanks Madhukar!
Cheers,
Max
On Sat, Dec 5, 2015 at 6:49 AM, Deepak Sharma wrote:
> Hi Madhu
> Would you be able to provide the use case here in ElasticSearch with Flink?
>
> Th