Take a look at Complex event processing and windowing
On Wed, Aug 17, 2016 at 5:23 PM, Pratyusha Rasamsetty <
pratyush...@raremile.com> wrote:
> Hi Tousif,
>
> Kafka is not present in our architecture as of now. We are not planning to
> add that complexity.
>
> Let'
> me more tuples for further processing. I
> understand that we can use trident state for doing batch insert to
> elasticsearch. But based on the response I could not emit from trident
> state.
>
> Please help me solve this - "Batch insert and emit based on response using
> Trident."
>
>
> Thanks
> Pratyusha
>
--
Regards
Tousif Khazi
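[Note on the "emit based on response" part: a Trident StateUpdater can emit new
tuples through the TridentCollector it receives, and those tuples come out of
partitionPersist(...).newValuesStream(). A minimal sketch against the 0.9.x
storm-core Trident API; EsState, EsStateFactory, bulkInsert and HandleResponse
are hypothetical stand-ins for the Elasticsearch-specific pieces:

    import java.util.List;
    import backtype.storm.tuple.Values;
    import storm.trident.operation.TridentCollector;
    import storm.trident.state.BaseStateUpdater;
    import storm.trident.tuple.TridentTuple;

    public class EsBatchUpdater extends BaseStateUpdater<EsState> {
        @Override
        public void updateState(EsState state, List<TridentTuple> tuples,
                                TridentCollector collector) {
            // One bulk request per Trident batch (bulkInsert is hypothetical).
            List<String> responses = state.bulkInsert(tuples);
            // Emit one tuple per response so downstream logic can react to it.
            for (String response : responses) {
                collector.emit(new Values(response));
            }
        }
    }

    // Wiring: the emitted tuples are read via newValuesStream().
    // stream.partitionPersist(new EsStateFactory(), new Fields("doc"),
    //         new EsBatchUpdater(), new Fields("response"))
    //       .newValuesStream()
    //       .each(new Fields("response"), new HandleResponse(), new Fields());
]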
happening here. I did not face such an issue with 0.9.4.
--
Regards
Tousif Khazi
Thanks,
Will pull 0.9.5 and test.
On Thu, Jul 23, 2015 at 8:48 PM, Harsha wrote:
> Tousif,
> As per the fix version on https://issues.apache.org/jira/browse/STORM-130 it
> looks like it's in 0.9.5.
> Thanks,
> Harsha
>
>
> On Thu, Jul 23, 2015, at 06:33 AM, Tousif wrote:
ain.invoke(worker.clj:500)
[storm-core-0.9.4.jar:0.9.4]
--
Regards
Tousif Khazi
Hello,
My last email bounced, so reposting again. I have a topology which has a Kafka
spout and multiple bolts; now I want to do batch processing on the same data
which the bolts have processed.
Is it possible to have both? Can anyone point me to documentation or an
example?
--
Regards
Tousif Khazi
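[Note on "is it possible to have both": multiple bolts can subscribe to the
same spout stream, so one branch can stay tuple-at-a-time while a second bolt
buffers and flushes in batches. A sketch against the 0.9.x API; RealtimeBolt
and BatchingBolt are hypothetical:

    import backtype.storm.topology.TopologyBuilder;
    import storm.kafka.KafkaSpout;

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka", new KafkaSpout(spoutConfig), 1);
    // Existing per-tuple path:
    builder.setBolt("realtime", new RealtimeBolt(), 2).shuffleGrouping("kafka");
    // Second subscriber to the same stream; accumulates tuples and flushes
    // them as a batch (e.g. on a size threshold or tick tuple):
    builder.setBolt("batcher", new BatchingBolt(), 1).shuffleGrouping("kafka");

For true per-batch commit semantics, Trident on the same Kafka topic is the
other option.]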
ER.md.
>
> Hope this helps.
>
> Thanks.
> Jungtaek Lim (HeartSaVioR)
>
> 2015-06-18 18:14 GMT+09:00 Tousif :
>
>> Hello,
>>
>> I tried passing STORM_TEST_TIMEOUT_MS as an env variable in Eclipse and also
>> through System.setProperty, but no luck. Anyone got this
(TopologyTest.java:135)
at backtype.storm.testing4j$_withLocalCluster.invoke(testing4j.clj:86)
at backtype.storm.Testing.withLocalCluster(Unknown Source)
On Thu, Jun 18, 2015 at 3:20 PM, swapnil joshi wrote:
> Hi Tousif,
>
> add the following line in your Java code
>
Hello,
I tried passing STORM_TEST_TIMEOUT_MS as an env variable in Eclipse and also
through System.setProperty, but no luck. Anyone got this working?
--
Regards
Tousif Khazi
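[A likely explanation, worth verifying against your Storm version: the test
harness reads STORM_TEST_TIMEOUT_MS via System.getenv, not JVM system
properties, so System.setProperty has no effect; in Eclipse it has to go in
Run Configuration > Environment. A trivial check:

    // Prints null unless the variable is set in the process environment
    // (e.g. the Environment tab of the Eclipse run configuration):
    System.out.println(System.getenv("STORM_TEST_TIMEOUT_MS"));
]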
Hi,
Is there a way to share a resource file across all workers, similar to HDFS?
That resource/config file will have to be updated at runtime. I'm not looking
at using HDFS for now.
--
Regards
Tousif Khazi
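[One ZooKeeper-based alternative to HDFS (a sketch only, not Storm-specific):
store the config under a znode and let every worker watch it with Curator's
NodeCache, so runtime updates propagate automatically. The path passed in is
hypothetical:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.cache.NodeCache;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class SharedConfigWatcher {
        public static NodeCache watch(String zkConnect, String path) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    zkConnect, new ExponentialBackoffRetry(1000, 3));
            client.start();
            final NodeCache cache = new NodeCache(client, path);
            cache.getListenable().addListener(() -> {
                byte[] latest = cache.getCurrentData().getData();
                // Re-parse and swap the in-memory config here.
            });
            cache.start(true); // true = load the initial value synchronously
            return cache;
        }
    }

Note that znodes default to a 1 MB size limit, so this fits config files, not
large resources.]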
problem from happening again.
>
> $KAFKA_HOME/bin/kafka-preferred-replica-election.sh --zookeeper
> {a_node_from_kafkas_zookeeper_cluster}
>
>
>
> From: Tousif
> Reply-To: "user@storm.apache.org"
> Date: 2015,Monday, May 4 at 02:27
> To: "user@storm.apa
Sorry for the delayed response to your mail. I don't think the IP address is
changing, as I use private IPs.
On Fri, May 1, 2015 at 7:59 AM, Supun Kamburugamuva wrote:
> Do your machines' IPs change after some time?
>
> Thanks,
> Supun..
> On Apr 30, 2015 10:15 PM, "Tousif"
I get this error a few days after the topology starts running.
On 30 Apr 2015 22:18, "Supun Kamburugamuva" wrote:
> Are you getting this all the time?
>
> Thanks,
> Supun
> On Apr 30, 2015 5:52 AM, "Tousif" wrote:
>
>> I'm constantly getting this i
I'm constantly getting this issue. Any help ?
On Thu, Apr 23, 2015 at 12:15 PM, Tousif wrote:
> Hello,
>
> I see the following error in one of the worker logs, and message acks fail.
>
> b.s.m.n.Client [ERROR] discarding 1 messages because the Netty client to
> Netty-Client-
ion 2.
All Kafka brokers and the ZK quorum were up and running.
Can anyone shed some light on what the reason could be?
--
Regards
Tousif Khazi
/ipadress3:6703 is unavailable
2015-04-07T13:17:15.129+0530 b.s.m.n.Client [ERROR] dropping 2 message(s)
destined for Netty-Client-hostname/ipaddress3:6703
2015-04-07T13:17:15.131+0530 b.s.m.n.Client [INFO] connection established
to Netty-Client-hostname/ipaddress3:6703
--
Regards
Tousif
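[For anyone else hitting these Netty drops: the 0.9.x client reconnect
behaviour is tunable in storm.yaml (the values below are illustrative, not
from this thread):

    storm.messaging.netty.max_retries: 30
    storm.messaging.netty.max_wait_ms: 2000
    storm.messaging.netty.min_wait_ms: 100
]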
Increasing nimbus.task.timeout.secs from 30 to a higher value has solved the
problem.
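[For reference, the corresponding storm.yaml change (60 is just an
illustrative value, not from this thread):

    nimbus.task.timeout.secs: 60
]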
On Thu, Mar 19, 2015 at 1:17 PM, Tousif wrote:
> and here are config from storm.yaml
>
> supervisor.worker.start.timeout.secs: 300
> supervisor.worker.timeout.secs: 60
> nimbus.task.t
and here are config from storm.yaml
supervisor.worker.start.timeout.secs: 300
supervisor.worker.timeout.secs: 60
nimbus.task.timeout.secs: 30
storm.zookeeper.session.timeout: 6
storm.zookeeper.connection.timeout: 5
On Thu, Mar 19, 2015 at 11:59 AM, Tousif wrote:
> Hello,
>
socket connection and attempting reconnect
2015-03-19 00:40:47 o.a.z.ClientCnxn [INFO] Client session timed out, have
not heard from server in 28599ms for sessionid 0x34c2791efa40011, closing
socket connection and attempting reconnect
--
Regards
Tousif Khazi
"
> part in the spoutConfig. Whats the need to use randomUUID.
>
> --
> Harsha
>
>
> On March 9, 2015 at 11:09:46 PM, Tousif (tousif.pa...@gmail.com) wrote:
>
> Thanks Harsha,
>
> Is zkRoot in the spoutConfig used along with a random string to
>
d the same
> name as KafkaSpout uses topology name to store and retrieve the offsets
> from zookeeper.
>
> --
> Harsha
>
>
>
> On March 9, 2015 at 7:30:38 AM, Tousif (tousif.pa...@gmail.com) wrote:
>
> If your topology has saved Kafka offsets in your ZooKeeper, it will s
Harsha,
It's distributed. What is the significance of
kafka.api.OffsetRequest.LatestTime()
w.r.t. reading from where Storm left off last time?
On Mon, Mar 9, 2015 at 7:42 PM, Harsha wrote:
> Tousif,
> How did you deploy the topology? Is this a distributed Storm
> cluster?
Since the local cluster has an in-process ZooKeeper, I tried it in a
distributed cluster, but was not able to get those messages.
On Mon, Mar 9, 2015 at 3:27 PM, Tousif wrote:
> Hello,
>
> I'm trying to read messages from Kafka which were not processed when the
> topology was offline and
Hello,
I'm trying to read messages from Kafka which were not processed while the
topology was offline and then restarted after a while.
I tried the following config:
SpoutConfig spoutConfig = new SpoutConfig(hosts,
PropertyManager.getProperty("kafka.spout.topic").toString(), "/" +
PropertyManager.getProperty("k
connection established to a
remote host Netty-Client-realtimeslave1.novalocal/10.0.0.14:6702, [id:
0x320fd4e4, /10.0.0.11:48658 => realtimeslave1.novalocal/10.0.0.14:6702]
On Tue, Feb 17, 2015 at 10:31 PM, Tousif wrote:
> Thanks,
> I will try out these config properties.
> On Feb 17, 2015 7:
Tue, Feb 17, 2015, at 06:03 AM, Tousif wrote:
>
> Hello,
>
> I have a bolt which uses a pool of large objects. When the pool
> reinitialises (once every 4 hours), the bolt waits for a few seconds and
> disconnects from ZooKeeper.
>
> I have specified the following properties in the yaml but still wo
] State change:
SUSPENDED
2015-02-17 04:35:34 b.s.cluster [WARN] Received event :disconnected::none:
with disconnected Zookeeper.
2015-02-17 04:35:34 o.a.c.f.s.ConnectionStateManager [WARN] There are no
ConnectionStateListeners registered.
--
Regards
Tousif Khazi
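[A pattern that avoids the heartbeat gap (a sketch only, assuming the pool can
be rebuilt off to the side; the builder is supplied by the caller): rebuild on
a background thread and swap atomically, so execute() never blocks:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    public class PoolHolder<T> {
        private final AtomicReference<T> pool = new AtomicReference<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public PoolHolder(Supplier<T> builder) {
            pool.set(builder.get()); // initial build, e.g. in prepare()
            // Rebuild off the executor thread every 4 hours, then swap.
            scheduler.scheduleAtFixedRate(
                    () -> pool.set(builder.get()), 4, 4, TimeUnit.HOURS);
        }

        public T get() { return pool.get(); } // safe to call from execute()
    }
]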
How can I avoid duplicate event processing when this situation arises? I'm
assuming all events which were processed by this supervisor node might be
replayed.
On Mon, Feb 2, 2015 at 8:22 PM, Harsha wrote:
> Tousif,
> You might be running into
> https://issues.apache.org/ji
> Clear your storm local directory and restart the supervisor.
>
> On Fri, Jan 30, 2015 at 6:03 PM, Tousif wrote:
>
>> One of the workers in Storm is terminating with the following error. I'm
>> using ZooKeeper 3.4.6. Here is the log.
>>
>> o.a.c.f.s.ConnectionSt
.5.1.jar:na] at
backtype.storm.daemon.worker.main(Unknown Source)
[storm-core-0.9.2-incubating.jar:0.9.2-incubating] 2015-01-30 08:25:04
b.s.util [INFO] Halting process: ("Error on initialization")
--
Regards
Tousif Khazi
Auto rebalance is throwing an error.
On Tue, Jan 27, 2015 at 11:42 AM, Tousif wrote:
> Hi,
>
> I have a single-node ZooKeeper and a 2-node Storm cluster with 1 worker on
> each node.
> One of the workers stops and starts on the other node, so at this point I'll
> be having two workers
/10.0.0.11:2181, sessionid =
0x14b168e7bfd000f, negotiated timeout = 2
2015-01-24 06:35:15 o.a.c.f.s.ConnectionStateManager [INFO] State change:
RECONNECTED
2015-01-24 06:35:15 o.a.c.f.s.ConnectionStateManager [WARN] There are no
ConnectionStateListeners registered.
--
Regards
Tousif Khazi