Without the indexRequest, ES2 throws a `document does not exist` exception.
Based on
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs-update.html#java-docs-update-api-upsert
the upsert works, although I'm not sure it's the best way.
return new UpdateRequest()
Hi Zach,
For using upsert in ES2, I guess it looks as follows? However, I cannot
find which method in Requests returns an UpdateRequest, while
Requests.indexRequest() returns an IndexRequest. Do you happen to know?
public static UpdateRequest updateIndexRequest(String element) {
    Map<String, Object> json
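A minimal sketch of what I mean, following the linked docs (the index/type/id
names are placeholders, not from a real job):

import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.Requests;

public class UpsertSketch {

    // The upsert() carries an IndexRequest, so the document is created
    // when it does not exist yet, which avoids the "document does not
    // exist" failure described above.
    public static UpdateRequest updateIndexRequest(String element) {
        Map<String, Object> json = new HashMap<>();
        json.put("data", element);

        IndexRequest indexRequest = Requests.indexRequest()
                .index("my-index")   // placeholder index
                .type("my-type")     // placeholder type
                .id(element)         // placeholder id
                .source(json);

        // I could not find an updateRequest() factory in Requests, so the
        // UpdateRequest is constructed directly.
        return new UpdateRequest("my-index", "my-type", element)
                .doc(json)
                .upsert(indexRequest);
    }
}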
Thanks for your explanation.
Yes, the InetSocketAddress you used is imported from java.net instead of
elasticsearch2. Very cool!
Can I ask why the List<InetSocketAddress> can become serializable?
Best,
Sendoh
Thank you. Very nice usage, and it works!
Hi,
I'm building the connector for ElasticSearch2. One main issue for me now is
that

List<TransportAddress> transports = new ArrayList<>();
transports.add(new InetSocketTransportAddress(new
InetSocketAddress(TransportAddress, 9300)));

throws

java.io.NotSerializableException:
org.elasticsearch.common.transport.InetSocketTransportAddress
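For the record, a sketch of the workaround discussed in this thread (assuming
the fix is to ship only java.net.InetSocketAddress, which is Serializable, and
build the Elasticsearch transport address later on the worker):

import java.io.Serializable;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

// java.net.InetSocketAddress (and ArrayList) implement Serializable, so
// this holder can be shipped with the Flink job. The non-serializable
// org.elasticsearch.common.transport.InetSocketTransportAddress is only
// built from it later, e.g. in the sink's open() method.
public class TransportConfig implements Serializable {

    private final List<InetSocketAddress> transports = new ArrayList<>();

    public TransportConfig(String host, int port) {
        transports.add(new InetSocketAddress(host, port));
    }

    public List<InetSocketAddress> getTransports() {
        return transports;
    }
}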
Ah! My incorrect code segment made the watermark not advance; it always
stayed at the same moment in the past. Is that the issue?
Cheers,
Hung
Many thanks Aljoscha! It can now replay the computation over the old
instances. The result looks absolutely correct.
When printing currentTimestamp there are values such as 1456480762777,
1456480762778... which are not -1s.
So I'm a bit confused about extractTimestamp().
Can I ask why
curTimeStamp = currentTimes
An update. The following situation works as expected: the data arrives after
the Flink job starts to execute.

1> (2016-02-25T17:46:25.00,13)
2> (2016-02-25T17:46:40.00,16)
3> (2016-02-25T17:46:50.00,11)
4> (2016-02-25T17:47:10.00,12)

But for data that arrived long before the job started, strange behavior appears.
Thank you for your reply. Please let me know if other classes or the full
code is needed.
/**
* Count how many total events
*/
StreamExecutionEnvironment env =
StreamExecutionEnvironment.createLocalEnvironment(4, env_config);
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
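If useful, a sketch of a timestamp/watermark assigner that keeps the watermark
moving forward (written against the Flink 1.x AssignerWithPeriodicWatermarks
interface as an assumption; the API in the version used here may differ, and
the Tuple2 with event time in field f1 is illustrative):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

// The watermark must keep moving forward; if it stays at a fixed moment
// in the past (as described above), event-time windows never fire.
public class EventTimeAssigner
        implements AssignerWithPeriodicWatermarks<Tuple2<String, Long>> {

    private long currentMaxTimestamp = Long.MIN_VALUE;

    @Override
    public long extractTimestamp(Tuple2<String, Long> element, long previousTimestamp) {
        long ts = element.f1; // assumes the event carries its timestamp in f1
        currentMaxTimestamp = Math.max(currentMaxTimestamp, ts);
        return ts;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // Advances with the largest timestamp seen so far.
        return new Watermark(currentMaxTimestamp - 1);
    }
}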
Thank you. I can be sure this way is correct now.
I have tried this, but the windows are not aggregating either. Instead, the
AllWindowFunction only works like a flatMap.
Shouldn't it output only once per window range? The strangest part is that
the first output is aggregated while the others are not.
1>
Thank you for your reply.
The following, in the current master, looks like it is not iterable, because
the parameter is IN rather than Iterable<IN>.
So I still have a problem iterating...

@Public
public interface AllWindowFunction<IN, OUT, W extends Window> extends
Function, Serializable {

    /**
     * Evaluates the window and outputs none or several elements.
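For comparison, here is how the iteration looks when the signature does take
an Iterable (a sketch against the Flink 1.x form of the interface; the tuple
types are illustrative):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Emits one aggregated record per window instead of one per element,
// i.e. the behavior expected above rather than flatMap-like output.
public class CountAllEvents
        implements AllWindowFunction<Tuple2<String, Integer>, Tuple2<Long, Long>, TimeWindow> {

    @Override
    public void apply(TimeWindow window,
                      Iterable<Tuple2<String, Integer>> values,
                      Collector<Tuple2<Long, Long>> out) {
        long count = 0;
        for (Tuple2<String, Integer> ignored : values) {
            count++;
        }
        out.collect(Tuple2.of(window.getEnd(), count));
    }
}

// Usage (sketch): stream.timeWindowAll(Time.seconds(10)).apply(new CountAllEvents());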
Had the same problem as Javier's.
3450 [Thread-10] DEBUG
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Group
metadata response ClientResponse(receivedTimeMs=1455811593680,
disconnected=false, request=ClientRequest(expectResponse=true,
callback=org.apache.kafka.clients.consume
Hi,
I remember there is a web interface (port 6XXX) that can change the
configuration of the JobManager,
e.g. taskmanager.numberOfTaskSlots, taskmanager.heap.mb.
But I can only find port 8081, which shows the configuration but doesn't let
me change it.
Did I miss anything?
Best,
Sendoh
After adding the dependency it totally works! Thanks a lot!
The following message is obtained after putting BasicConfigurator.configure()
in main().
But I don't understand the reason for `flink-runtime-web is not in the
classpath`.
For me the strange part is that the Scala version works well whereas my
Java version throws an exception.

1413 [main] ERROR org.apa
Thanks for your reply.
Yeah, I'm not sure how to use WebMonitor. For me it's about writing the log
into a file on disk, which originally would go to the job manager at
localhost:8081.
Could you please give a brief example of how to use it?
Best,
Sendoh
Thanks for your suggestion. I have some questions about starting WebRuntimeMonitor.
In startWebRuntimeMonitor, what should be passed for
- leaderRetrievalService: LeaderRetrievalService,
- actorSystem: ActorSystem ?
My ref:
(https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/
Yeah, I'm wondering why the web server cannot be instantiated, because
changing the port from 8081 works well in the following Flink demo sample:
https://github.com/dataArtisans/flink-streaming-demo/blob/master/src/main/scala/com/dataartisans/flink_demo/utils/DemoStreamEnvironment.scala
so is t
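For comparison, the linked demo sets up the local web server roughly like this
(a Java translation of the Scala demo; the ConfigConstants keys are my
assumption from that era's API):

import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalWebServerSetup {

    public static StreamExecutionEnvironment create() {
        Configuration config = new Configuration();
        // Start the web server inside the local mini cluster and move it
        // off the default port 8081 (8082 is an arbitrary choice here).
        config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);
        config.setInteger(ConfigConstants.JOB_MANAGER_WEB_PORT_KEY, 8082);
        return StreamExecutionEnvironment.createLocalEnvironment(4, config);
    }
}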
The original port is already in use, so I'm changing the web port, but it
fails to start. Can I ask where I made a mistake?
The error:
Exception in thread "main" java.lang.NullPointerException
at
org.apache.flink.runtime.minicluster.FlinkMiniCluster.startWebServer(FlinkMiniCluster.scala:295)
at
org.ap
Found the answer here
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Elasticsearch-connector-support-for-elasticsearch-2-0-td3910.html
Hi,
Recently I read this post about Flink + Elasticsearch + Kibana:
https://www.elastic.co/blog/building-real-time-dashboard-applications-with-apache-flink-elasticsearch-and-kibana
Can I ask why Elasticsearch version 1.7.3 was selected? What would be
the potential issues with newer versions?
Hi,
What would be the difference between using a global variable and broadcasting
it?
A toy example:

// Using a global (static) variable
public class DivByTen implements FlatMapFunction<Tuple1<Integer>, Tuple1<Integer>> {

    private static int num = 10;

    @Override
    public void flatMap(Tuple1<Integer> value, Collector<Tuple1<Integer>> out) {
        out.collect(new Tuple1<>(value.f0 / num));
    }
}
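For comparison, a sketch of the broadcast version using the DataSet API (the
broadcast name "num" and the types are illustrative):

import java.util.List;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// The value is distributed by the Flink runtime to every parallel instance,
// instead of relying on a static field that only exists per JVM.
public class DivByBroadcast extends RichFlatMapFunction<Tuple1<Integer>, Tuple1<Integer>> {

    private int num;

    @Override
    public void open(Configuration parameters) {
        // Reads the value attached via withBroadcastSet() at the call site.
        List<Integer> broadcast = getRuntimeContext().getBroadcastVariable("num");
        num = broadcast.get(0);
    }

    @Override
    public void flatMap(Tuple1<Integer> value, Collector<Tuple1<Integer>> out) {
        out.collect(new Tuple1<>(value.f0 / num));
    }
}

// Usage (sketch):
//   DataSet<Integer> numSet = env.fromElements(10);
//   data.flatMap(new DivByBroadcast()).withBroadcastSet(numSet, "num");

The practical difference: a static field is initialized independently in each
task manager JVM and cannot come from another DataSet, while a broadcast set
is computed by the job and shipped to all parallel instances.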
Thank you. Your explanation helps me understand it better.
Got it. Problem solved by changing the map:

val pldIndex = GraphUtils.readVertices(PLDIndexFile).map { vertex =>
AnnotatedVertex(vertex.annotation, vertex.id) }
Hi,
My Filter operator in Scala makes the IDE complain about the data
type:
"overloaded method value filter with alternatives: (fun: ((String, Int)) ⇒
Boolean) cannot be applied..."
Scala is quite new for me. I'm thinking the problem comes from the types
not matching each other in the map operation.
Thank you! This completely solves the problem.
Thanks for your reply.
The error is from the Java compiler (Eclipse).
It looks like the input and output data types are OK in version 0.7, but
not in version 0.8.
Hi, when changing the version from 0.7 to 0.8, the reduceGroup operator gets
the following error:
"The method reduceGroup(GroupReduceFunction) in the type
DataSet is not applicable for the arguments
(InDegreeDistribution.CountVertices)"
I tried to figure out the error but failed to fix it. Could you please help?
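If it helps, here is a sketch of what the 0.8-style function could look like
(my assumption is that the breaking change is reduce() taking an Iterable
instead of an Iterator; the tuple types are illustrative, not the real
CountVertices signature):

import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

// Counts the elements of each group; illustrative stand-in for CountVertices.
public class CountVertices implements GroupReduceFunction<Tuple2<Long, Long>, Long> {

    @Override
    public void reduce(Iterable<Tuple2<Long, Long>> values, Collector<Long> out) {
        long count = 0;
        for (Tuple2<Long, Long> ignored : values) {
            count++;
        }
        out.collect(count);
    }
}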
Thank you for the information you provided.
Yes, it runs an iterative algorithm on a graph and feeds the result of one
iteration to the next.
The getting-stuck issue disappears when increasing the maximum number of
iterations in the algorithm,
e.g. increasing to 1000 vertex-centric iterations.
Thank you for your reply.
The dataset:
The 1MB dataset has 38,831 nodes and 99,565 edges and doesn't get stuck.
The 30MB dataset has 1,134,890 nodes and 2,987,624 edges and gets stuck.
Our code works like the following logic:

do {
    filteredGraph = graph.run(algorithm);
    // Get sub-graph for next
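To make the loop body concrete, a sketch of the filter step (Gelly-style API;
the value types and the predicate are illustrative):

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.graph.Graph;
import org.apache.flink.graph.Vertex;

public class FilterStep {

    // filterOnVertices keeps only vertices passing the predicate and drops
    // edges touching removed vertices, yielding the sub-graph for the next round.
    static Graph<Long, Double, Double> filterStep(Graph<Long, Double, Double> graph) {
        return graph.filterOnVertices(new FilterFunction<Vertex<Long, Double>>() {
            @Override
            public boolean filter(Vertex<Long, Double> vertex) {
                return vertex.getValue() > 0.0; // hypothetical predicate
            }
        });
    }
}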
Hi,
I have a question about generating a sub-graph using the Spargel API.
We use filterOnVertices to generate it.
With 30MB of edges, the code gets stuck at Join (Join at filterOnVertices).
With 2MB of edges, the code doesn't have this issue.
Log:
Hi,
Would it be possible to control the supersteps in Flink Spargel?
For example, a master controls a basic graph algorithm that has 5 phases, and
the master can switch between the phases.
In the given Spargel example, those are the send-message and update-vertex
steps, executed sequentially.
Would it be possible to switch between the phases like that?
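One way this is sometimes approximated (a sketch only; I'm assuming the
Spargel-era VertexUpdateFunction API, where getSuperstepNumber() is available
inside the update function, and the phase formulas are placeholders):

import org.apache.flink.spargel.java.MessageIterator;
import org.apache.flink.spargel.java.VertexUpdateFunction;

// Derives the phase from the superstep number instead of having an
// explicit master switch between the 5 phases.
public class PhasedUpdater extends VertexUpdateFunction<Long, Double, Double> {

    private static final int NUM_PHASES = 5;

    @Override
    public void updateVertex(Long vertexKey, Double vertexValue,
                             MessageIterator<Double> inMessages) {
        int phase = (getSuperstepNumber() - 1) % NUM_PHASES;

        double sum = 0;
        for (Double msg : inMessages) {
            sum += msg;
        }

        // Placeholder phase-dependent update rule.
        double newValue = (phase == 0) ? sum : vertexValue + sum;
        setNewVertexValue(newValue);
    }
}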
Hi,
In the Graph API there's a single-source shortest paths library:

DataSet<Vertex<Long, Double>> singleSourceShortestPaths =
graph.run(new SingleSourceShortestPaths<Long>(srcVertexId,
maxIterations)).getVertices();

For multiple sources, would it be possible to run it for all nodes using a
for-loop?
For example,
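A sketch of the for-loop idea, mirroring the snippet above (one full SSSP run
per source vertex, which is simple but costs one complete iteration per
source, so it is likely too expensive for literally all nodes):

import java.util.List;

import org.apache.flink.api.java.DataSet;
import org.apache.flink.graph.Graph;
import org.apache.flink.graph.Vertex;
import org.apache.flink.graph.library.SingleSourceShortestPaths;

public class MultiSourceShortestPaths {

    // Runs the SSSP library once per source vertex.
    static void runForSources(Graph<Long, Double, Double> graph,
                              List<Long> sourceIds,
                              int maxIterations) throws Exception {
        for (Long srcVertexId : sourceIds) {
            DataSet<Vertex<Long, Double>> shortestPaths =
                    graph.run(new SingleSourceShortestPaths<Long>(srcVertexId, maxIterations))
                         .getVertices();
            shortestPaths.print(); // or write each per-source result somewhere
        }
    }
}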