Hi Aditya,
Thanks for your interest. We are tentatively planning one for the first
week of June. If you haven't already, please register here
https://www.meetup.com/Apache-Storm-Apache-Kafka/ . I'll keep the Storm
lists updated once we finalize the date & location.
Thanks,
Harsha
Hi All,
We are organizing a Storm Meetup at Hortonworks HQ in Santa
Clara, CA. If you are interested in attending, please RSVP here
https://www.meetup.com/Apache-Storm-Apache-Kafka/events/238975416/
Thanks,
Harsha
kafka
client APIs are stabilized and have had critical bug fixes.
Thanks,
Harsha
On Tue, Mar 21, 2017 at 6:45 PM Anis Nasir wrote:
> Dear all,
>
> Can anyone suggest me a *working version of Kafka* for Storm 1.0.2.
>
> Thanking you in advance.
>
> Regards,
> Anis
>
>
>
Hi All,
We are planning on scheduling a Storm Meetup in the first week of April.
Here is the meetup link: https://www.meetup.com/Apache-Storm-Apache-Kafka/.
If you are interested in talking about your use cases in Storm, there is one
more slot available; please reach out to me.
Thanks,
Harsha
Hi Davorin,
I recommend using a Hive 1.x or higher release. There were quite a
few issues in the Hive streaming API, and we also had issues with
heartbeating on the Storm Hive side. These are all
fixed in the Storm 1.0 bolt, and Hive 1.0 had streaming API fixes as well.
Thanks,
Harsha
On Tue, Oct 25
Abhishek,
Are you looking to do a rolling upgrade of the Kafka cluster or of Storm?
Harsha
On Fri, Aug 26, 2016 at 6:18 AM Abhishek Agarwal
wrote:
>
> On Aug 26, 2016 2:50 PM, "Abhishek Agarwal" wrote:
>
> >
>
> > Here is an interesting use case - To upgrade a topology
JIRA and patch available here
https://issues.apache.org/jira/browse/STORM-2041.
On Fri, Aug 12, 2016 at 8:05 AM Harsha Chintalapani wrote:
> sorry for the multiple emails. Something went wrong on my email provider
> side.
>
> Instead of splitting into two separate threads as its har
sorry for the multiple emails. Something went wrong on my email provider
side.
Instead of splitting into two separate threads, as it's hard to keep track of
the discussion, we can continue this discussion on the users list, as it will
keep the Storm users in the discussion as well.
Thanks,
Harsha
On Fri
test mail ignore.
Hi All,
Dropping Java 7 support on master will allow us to use the new APIs
in Java 8, and since master is being used for the Java migration,
it's good to make the decision now. Let me know your thoughts.
Thanks,
Harsha
UnknownHost generally means the entries in your nimbus.seeds configuration
cannot be resolved to a hostname. Make sure the entries you added in
nimbus.seeds are pingable from the host you are submitting the topology from.
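The resolution check described above can be sketched in plain Java; the seed name below is a placeholder, not a real host from the thread:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: verify each nimbus.seeds entry resolves from the submitting host.
// An entry that fails here is what produces the UnknownHost error on submit.
public class SeedCheck {
    static boolean canResolve(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false; // this entry would cause the UnknownHost error
        }
    }

    public static void main(String[] args) {
        // placeholder seed list; substitute your storm.yaml nimbus.seeds entries
        for (String seed : new String[]{"localhost"}) {
            System.out.println(seed + " resolves: " + canResolve(seed));
        }
    }
}
```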
On Mon, Jul 4, 2016 at 2:27 PM Walid Aljoby wrote:
> Hi all,
>
> I am running Apache S
ppens through the disruptor queue.
If you need to increase the size of the buffers for netty, take a look at the
netty configs in storm.yaml. We recommend going with the defaults.
Thanks,
Harsha
On Mon, Jul 4, 2016 at 9:59 AM Nathan Leung wrote:
> Double check how you are pushing data into Kafka. You are
To unsubscribe, send an email to user-subscr...@storm.apache.org, and if
you are on the dev list,
dev-subscr...@storm.apache.org
-Harsha
On Tue, Jun 28, 2016, at 06:16 AM, Tim McClure wrote:
> I have unsubscribed every way possible – please take me off this list.
>
> Tim
>
>
>
Did you try setting your topology package name as another logger
(https://github.com/apache/storm/blob/master/log4j2/worker.xml#L80)?
You can control the level and other details in there.
-Harsha
On Sun, Jun 5, 2016, at 01:21 PM, anshu shukla wrote:
> +1 any update ??
>
> On S
Hi Stephen,
Can you try setting ui.header.buffer.bytes to a higher value in
storm.yaml?
-Harsha
On Thu, May 5, 2016, at 10:08 AM, Stephen Powis wrote:
> Hey!
> We've started getting this error frequently when trying to view our
> topology details via the webUI. Does anyone have a
Jungtaek,
I think filters that can support a regex give more flexibility.
Thanks,
Harsha
On Mon, May 2, 2016, at 07:48 PM, Jungtaek Lim wrote:
> Kevin,
>
> For specific task, you can register your own metrics which resides
> per task.
> But metrics doc on Storm is not kind enou
Jungtaek,
Probably a filter config to whitelist and blacklist certain metrics, so
that it will scale if there are too many workers and users can turn off
certain metrics.
Thanks,
Harsha
On Mon, May 2, 2016, at 06:19 AM, Stephen Powis wrote:
> Oooh I'd love this as well! I really
ion. As I said above, Kafka
0.9.0.1 contains two Kafka APIs: the new ones, which will only work with a
0.9.0.1 Kafka cluster, and the old consumer APIs, which can work with 0.8.2.
Even though you compile with the 0.9.0.1 version, it will work with a
0.8.2.1 Kafka cluster.
Let me know if you have any questions.
Thanks,
Ha
Did you try using kinit with the keytab? Make sure it's the same Unix
user who is running the Storm UI.
On Fri, Mar 18, 2016, at 02:58 PM, Andrey Dudin wrote:
> Hi guys.
>
> I try to configure Kerberos for Storm.
> I use storm 0.10.
> Now I try configure only UI, without other components.
>
>
> I ad
into your topology.
>>
>> 2016-03-14 15:15 GMT+08:00 vibha goyal :
>>> Thanks, but as I mentioned in the original e-mail, I cannot keep
>>> storm.yaml in /home/vgoyal5/apache-storm-1.0.0-
>>> SNAPSHOT/conf/storm.yaml" .
>>>
>>> If i
It's for Nimbus HA:
http://hortonworks.com/blog/fault-tolerant-nimbus-in-apache-storm/
On Sun, Mar 13, 2016, at 11:27 PM, Sai Dilip Reddy Kiralam wrote:
>
> Hi Harsha,
> can you explain why nimbus seeds are used - [nimbus.seeds: ["host1", "host2",
> "host3
ample
make sure you copy the same storm.yaml in all the nodes in the
storm cluster.
-Harsha
On Sun, Mar 13, 2016, at 05:07 PM, Xiang Wang wrote:
> Hi,
>
> I guess you are in the wrong directory. Do "mvn package" under
> "$STORM_HOME/examples/storm-
> starter/&
Rajashekar, the current Storm Kafka connector uses Kafka's
SimpleConsumer API. Only Kafka's new consumer API has the security
features enabled. There is work being done to port the Kafka connector to
use the new consumer API.
Thanks, Harsha
On Thu, Jan 28, 2016, at 02:04 PM, Rajasekhar wro
buses to avoid this issue. This feature
will be part of the upcoming 1.0 release.
Thanks, Harsha
On Fri, Jan 8, 2016, at 02:20 PM, Ganesh Chandrasekaran wrote:
> When Nimbus went down, other topologies were still processing messages
> correctly. It’s only because when 1 half of my topology w
cipal@domain
-Harsha
On Sat, Dec 26, 2015, at 04:46 AM, Raja.Aravapalli wrote:
>
> Hi
>
> I am getting below exception when i am trying to write tuples into
> HDFS which is in a secured Hadoop cluster. Can someone pls share your
> thoughts and help me fix the issue
>
> java
you need to package hdfs-site.xml and core-site.xml from your hadoop
cluster as part of your topology jar.
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_storm-user-guide/content/storm-connectors-secure.html
-Harsha
On Thu, Dec 24, 2015, at 08:48 PM, Raja.Aravapalli wrote:
>
>
Mark, Here is the JIRA
https://issues.apache.org/jira/browse/STORM-650 to make the kafka
connector use 0.9. It's in the works. Thanks, Harsha
On Tue, Dec 8, 2015, at 03:25 AM, Davis, Mark (TS R&D Galway) wrote:
> Hi,
>
> Per Michael Noll’s article:
http://www.mic
Florin, You already have 2 & 3 working, including doAs in
secure mode. For 1, as Bobby pointed out, we removed it due to a
security issue. -Harsha
On Thu, Nov 26, 2015, at 07:47 AM, Spico Florin wrote:
> Hello! I would like to ask you what is the status of the REST API in
> 0.10.x ve
What are the numbers you are seeing? I believe some optimization
needs to be done on Phoenix.
On Fri, Nov 20, 2015, at 09:15 AM, Youzha wrote:
> i try to run trident topology with hbase writer and phoenix. but i
> feels the data upsert to phoenix seems too slow. is there any
>
Do you have any calls to external data sources which might be increasing
the latency and causing tuple timeout?
On Sun, Nov 1, 2015, at 04:49 AM, Renjie Liu wrote:
> Yes, I've set it to 2
>
> On Sun, Nov 1, 2015 at 6:40 PM, Santosh Pingale
> wrote:
>> Have you set 'topology.*max*.*spout*.*pe
It's a supervisor config, not nimbus.
https://github.com/apache/storm/blob/master/storm-core/src/clj/backtype/storm/daemon/supervisor.clj#L146
-Harsha
On Wed, Oct 28, 2015, at 08:08 AM, Dillian Murphey wrote:
> Is this a nimbus only config, or can my other supervsior codes have
> this opt
Dmitry, Kafka's new consumer API is not yet released (the Kafka 0.9
release is still pending). We don't have any specific date yet but will try
to include it in the 0.11 version. Thanks, Harsha
On Mon, Oct 26, 2015, at 08:39 AM, Dmitry Sergeev wrote:
> Hello, Storm team!
>
>
ng. Apart from that,
depending on which version you are using, set forceFromStart or
ignoreZkOffsets to false.
-Harsha
On Sun, Oct 25, 2015, at 06:18 AM, Craig Charleton wrote:
> Keep in mind that Zookeeper stores the Kafka offsets as they relate to
> the consumer group, no
What's your Kafka spout parallelism, and how many partitions do you have in
your Kafka topic? Also, did you try to tune
topology.max.spout.pending? -Harsha
On Wed, Oct 7, 2015, at 10:54 AM, Rohit Kelkar wrote:
> I have a kafka spout and single bolt topology running on a cluster in
> debug
. Are you seeing anything under the failed column in Storm UI?
7. Any errors in the storm topology logs?
Thanks,
Harsha
On Sat, Jul 25, 2015, at 05:29 AM, Dimitris Sarlis wrote:
> Hi all,
>
> I'm trying to run a topology in Storm and I am facing some scalability
> issues. Specifically, I
uce the
> parallellismHint to minimum to avoid the problem caused by
> https://issues.apache.org/jira/browse/STORM-503
>
> note that we use trident, so if I count all bolts I can see under the
> section "Bolt (All time)" in storm UI, I have 137 bolts (incl
Tousif, As per the fix version on
https://issues.apache.org/jira/browse/STORM-130 it looks like it's in
0.9.5. Thanks, Harsha
On Thu, Jul 23, 2015, at 06:33 AM, Tousif wrote:
>
> 2015-07-23T17:06:06.908+0530 b.s.d.worker [ERROR] Error on
> initialization of server
d by a maxTask too high
>
>
>
> *De :* Harsha *Envoyé :* 22 juillet 2015 10:56 *À :*
> user@storm.apache.org *Objet :* Re: worker dies after view minutes
>
> how is your topology code looks like are you throwing any errors from
> bolt's execute method?. It does lo
with instead of throwing it back to
worker jvm
-Harsha
On Wed, Jul 22, 2015, at 07:43 AM, Eric Ruel wrote:
> Hello
>
> the workers in my topology dies after 1,2 minutes
>
> I tried to change the config about the heartbeat, cluster or local
> mode, but they always di
By default it runs on 8080; if not, you can look at storm.yaml for the
configured "ui.port".
--
Harsha
On July 21, 2015 at 7:01:02 AM, Vamsikrishna Vinjam
(vamsikrishna.vin...@inndata.in) wrote:
storm web port number :
im trying with 8772 but it is not working
Soumi, if your downstream bolt doesn't ack before the tuple timeout (by
default it's 30 secs), Storm will consider it a failed tuple and the
Kafka spout will replay it. Since your last bolt is slower in acking,
maybe you shouldn't anchor the tuple to the last bolt.
-Harsha On Wed, Ju
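The timeout-and-replay behavior described above can be sketched as follows. This is a conceptual model, not Storm's actual implementation; the 30-second constant mirrors the default topology.message.timeout.secs mentioned in the reply:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch: a spout tracks pending (emitted, un-acked) tuples;
// any tuple not acked within the timeout is treated as failed and replayed.
public class PendingTuples {
    static final long TIMEOUT_MS = 30_000; // default tuple timeout: 30 secs

    final Map<Long, Long> emittedAt = new HashMap<>(); // msgId -> emit time (ms)

    void emit(long msgId, long nowMs) { emittedAt.put(msgId, nowMs); }

    void ack(long msgId) { emittedAt.remove(msgId); } // acked: no longer pending

    // true if the tuple timed out and the spout should replay it
    boolean shouldReplay(long msgId, long nowMs) {
        Long emitted = emittedAt.get(msgId);
        return emitted != null && nowMs - emitted > TIMEOUT_MS;
    }
}
```

Un-anchoring the slow last bolt means its processing time no longer counts against this timeout, so the tuple tree completes (and acks) earlier.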
Storm does support multi-node setups on Windows. Our customers are using it
in multi-node setups. We haven't tested the security features that were
recently released in 0.10, but a non-secure setup will work. -Harsha
On Wed, Jul 15, 2015, at 06:09 AM, Bobby Evans wrote:
> Storm does support multi-node on
Hi Adrianos, We've tested the current storm-kafka
spout with Kafka 0.8.2.1 before and it works without any issues. Storm-kafka
uses the simple consumer API of Kafka, which isn't changed in Kafka 0.8.2.1.
It works fine for us. Thanks, Harsha
On Sat, Jun 27, 2015, at 12:03 PM, Adri
If you are interested in attending the Bay Area meetup for Apache Storm & Kafka,
please join the group here
http://www.meetup.com/Apache-Storm-Apache-Kafka/
Thanks,
Harsha
Hi Tim,
Overall looks good to me. We recommend adding supervisord (a watchdog
process) to monitor nimbus and the supervisors and restart them in case they
fail, but you already have upstart, which does the same job.
Thanks,
Harsha
On June 18, 2015 at 1:15:16 PM, Tim Molter (tim.mol
Which version of storm are you using?
--
Harsha
On June 2, 2015 at 8:04:48 AM, Grant Overby (groverby) (grove...@cisco.com)
wrote:
Same here. The worker isn’t committing suicide; its being murdered by the
supervisor.
Grant Overby
Software Engineer
Cisco.com
grove...@cisco.com
Mobile: 865
Hi Sergio,
I would recommend storm-kafka, part of Apache Storm external. It's being
actively maintained by the Storm community.
There is good documentation in the README file about the config options. Do let
us know if it's hard to configure and use.
Thanks,
Harsha
On May 19, 2015 at 7
Are you using separate ZK clusters for Storm and Kafka? If so, which zookeepers
did you configure for the Kafka spout?
--
Harsha
Sent with Airmail
On May 12, 2015 at 8:40:26 PM, rajesh_kall...@dellteam.com
(rajesh_kall...@dellteam.com) wrote:
Dell - Internal Use - Confidential
Strom Kafka
kind of topology you are
using and what are the spouts you are using.
Thanks,
Harsha
On May 12, 2015 at 4:22:35 PM, 임정택 (kabh...@gmail.com) wrote:
Hi!
First of all, you want to compare Spark streaming and Storm Trident, not Storm
Spout-Bolt topology. It's not same.
Generally batching makes
Hi, your nimbus.host is listening on localhost (nimbus.host: "127.0.0.1").
Storm UI makes calls to nimbus to get storm cluster and topology info.
Make sure your nimbus and the nimbus thrift port are reachable from the storm UI
host. -Harsha
On Sun, May 3, 2015, at 03:19 PM, Chun Yuen Lim wr
I haven’t added two-way SSL in the current PR. I can add that as part of this
PR.
--
Harsha
On April 1, 2015 at 10:36:44 AM, Mike Thomsen (mikerthom...@gmail.com) wrote:
That's what I was afraid of. Any idea when that PR is going to be merged? Also,
will it support two-way SSL? Some o
Hi Mike, Current release versions don't have support for HTTPS for the UI.
I have a PR open here https://github.com/apache/storm/pull/479 to add
that support. -Harsha
On Wed, Apr 1, 2015, at 09:49 AM, Mike Thomsen wrote:
> Is it possible to have Storm UI run over HTTPS? I've done a de
fetchSizeBytes is equivalent to the Kafka consumer's fetch.message.max.bytes. This
should be at a minimum equal to the server's max.message.bytes or more. In your
case it should be at least 10MB. You also need to increase bufferSizeBytes as
well.
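The sizing rule above can be sketched as a sanity check. This is illustrative only; the 10 MB figure comes from the thread, and the parameter names mirror the spout config and broker settings discussed:

```java
// Sketch: the spout's fetch size must cover the broker's largest message,
// or oversized messages can never be fetched.
public class FetchSizeCheck {
    static boolean fetchSizeOk(int fetchSizeBytes, int brokerMaxMessageBytes) {
        // fetchSizeBytes (like fetch.message.max.bytes) >= broker max.message.bytes
        return fetchSizeBytes >= brokerMaxMessageBytes;
    }

    public static void main(String[] args) {
        int brokerMax = 10 * 1024 * 1024; // broker max.message.bytes: 10 MB, per the thread
        System.out.println(fetchSizeOk(1024 * 1024, brokerMax)); // 1 MB fetch is too small
        System.out.println(fetchSizeOk(brokerMax, brokerMax));   // equal is the minimum
    }
}
```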
--
Harsha
On March 26, 2015 at 8:31:39 AM, François
does it do any external
calls to other systems?
Thanks,
Harsha
On March 25, 2015 at 12:45:01 PM, Espen Fjellvær Olsen (es...@mrfjo.org) wrote:
Hi,
We are only just starting to play with Storm, and are a bit baffled by
all the knobs and leavers that can be tweaked for more or less
througput
machine
to be able to use this.
--
Harsha
Sent with Airmail
On March 25, 2015 at 9:55:24 AM, bigdata hadoop (hadoopst...@gmail.com) wrote:
Hi All
I installed kerberized ambari 2.0 with storm service and had the topology
running. However I cant access storm UI and giving me the following error
There is a JIRA open on this feature
https://issues.apache.org/jira/browse/STORM-167 .
--
Harsha
On March 23, 2015 at 9:28:37 PM, Andrew Xor (andreas.gramme...@gmail.com) wrote:
I think that's the only way of actually updating the code, since besides
rebalancing Storm does not
It looks like your approach is right. Once you turn off forceFromStart and set
the offset time to earliestTime, only new events from the Kafka topic will be read.
Are you sure that your Kafka topic has new data coming in?
--
Harsha
On March 23, 2015 at 12:48:04 PM, François Méthot (fmetho
Local mode is more for development and debugging. It has an in-process zookeeper;
I am not sure how well that can handle a few hundred MB per minute.
--
Harsha
On March 18, 2015 at 8:30:00 PM, clay teahouse (clayteaho...@gmail.com) wrote:
Hi All,
What could be the reasons for a topology hanging
Check your ulimit and increase it if it's too low, and see if this happens again.
--
Harsha
On March 18, 2015 at 8:16:38 AM, hjh (apply...@163.com) wrote:
://github.com/apache/storm/blob/master/external/storm-hive/README.md
--
Harsha
On March 17, 2015 at 9:39:44 PM, Sunit Swain (sunitsw...@gmail.com) wrote:
I am using storm 0.9.3 and trying to make use of the HiveBolt to stream the
data directly into hive tables.
I am following this example:
https
Hi Srividhya,
Yes, your understanding is right. A worker is dedicated
to a single topology, so if you have 4 worker slots and you want to allocate 2
workers per topology, then you can only deploy two topologies on that cluster.
-Harsha
On March 17, 2015 at 6:33:23 PM
Hi Pranesh,
Can you share your spoutconfig for kafka spout.
-Harsha
On March 12, 2015 at 11:58:29 AM, Pranesh Radhakrishnan
(praneshscri...@gmail.com) wrote:
I am new to Kafka and Storm. I have done simple program to post some messages
to Kafka broker and able to read the messages
Yes, that's a bad approach. Mostly users keep a static string for the "id" part in
the SpoutConfig. What's the need to use randomUUID?
--
Harsha
On March 9, 2015 at 11:09:46 PM, Tousif (tousif.pa...@gmail.com) wrote:
Thanks Harsha,
Does zkRoot in the spoutconfig is used along with random string
eper. Having all of them on
the same machines is risky and performance will suffer.
-Harsha
On March 8, 2015 at 11:26:05 PM, Adaryl Bob Wakefield, MBA
(adaryl.wakefi...@hotmail.com) wrote:
Let’s say you put together a real time streaming solution using Storm, Kafka,
and the necessary Zookeeper
same name,
as KafkaSpout uses the topology name to store and retrieve the offsets from
zookeeper.
--
Harsha
On March 9, 2015 at 7:30:38 AM, Tousif (tousif.pa...@gmail.com) wrote:
If your topology has saved Kafka offset in your zookeeper it will start
processing from that otherwise It c
with
spoutConfig.forceFromStart=true for the first time if you want to read
from the beginning of the queue. For subsequent times when you
redeploy the topology, make sure you set spoutConfig.forceFromStart=false
so that your topology picks up the Kafka offset from zookeeper and
starts where it left off.
-Harsha O
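The offset-selection behavior described above can be sketched as a small decision function. This is a conceptual model of the described behavior, not storm-kafka's actual code; the names and values are illustrative:

```java
// Sketch of how the starting offset is chosen, per the behavior described:
// forceFromStart (or no saved offset) reads from the beginning; otherwise
// the spout resumes from the offset saved in zookeeper.
public class StartOffset {
    // savedZkOffset is null when nothing has been committed to zookeeper yet
    static long choose(boolean forceFromStart, Long savedZkOffset, long earliestOffset) {
        if (forceFromStart || savedZkOffset == null) {
            return earliestOffset; // read from the beginning of the queue
        }
        return savedZkOffset;      // resume where the topology left off
    }

    public static void main(String[] args) {
        System.out.println(choose(true, 500L, 0L));  // forced: start from the beginning
        System.out.println(choose(false, 500L, 0L)); // redeploy: resume from saved offset
    }
}
```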
hat gets
read in Utils class.
> This is merged with storm.conf file. Since the storm.conf file does
> not have this property, the default is used.(hardcoded in
> default.yaml)
>
> Isn’t this a bug?
>
>
> Thanks,
> Srividhya
>
> *From:* Harsha [mai
Are you setting numWorkers in your topology config like here
https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L92
On Thu, Feb 26, 2015, at 12:40 PM, Srividhya Shanmugam wrote:
> Thanks for the reply Harsha. We have distribu
Srividhya, Storm topologies require at least one worker to be available
to run; hence the config sets 1 as the default value for
topology.workers. Can you explain in more detail what you are trying to
achieve? Thanks, Harsha
On Thu, Feb 26, 2015, at 12:12 PM, Srividhya Shanmugam wrote
Martin, I can't find anything wrong in the logs or in your topologyBuilder
code. In your bolt code, how are you doing the acking of the tuples?
You have max spout pending set to 2k tuples; do you see anywhere your
bolt code could be hanging before acking the tuple?
-Harsha
On Wed, Feb 25
My bad, I was looking at another supervisor.log. There are no errors in
the supervisor and worker logs.
-Harsha
On Wed, Feb 25, 2015, at 08:29 AM, Martin Illecker wrote:
> Hi Harsha,
>
> I'm using three c3.4xlarge EC2 instances: 1) Nimbus, WebUI, Zookeeper,
> Supervisor 2) Zookee
-a4a1-396096b37509\heartbeats\1417082031858'
you might be running into
https://issues.apache.org/jira/browse/STORM-682. Is your zookeeper
cluster on a different set of nodes, and can you check that you are able to
connect to it without any issues? -Harsha
On Wed, Feb 25, 2015, at 03:49 AM, Martin Illecker w
You might be losing the zookeeper connection. Try increasing these two
values: storm.zookeeper.session.timeout: 2
storm.zookeeper.connection.timeout: 15000
On Tue, Feb 17, 2015, at 06:03 AM, Tousif wrote:
> Hello, I have a bolt which uses a pool of large objects. When pool
> reinitialises(once
Vineet, How are you measuring the number of events in Kafka? Did you
check the storm worker logs for any errors? And what do you mean by "the
acknowledgement of 190 million events in storm"; are you looking at the
number of acked messages? -Harsha
On Sun, Feb 15, 2015, at 04:40 AM, Vineet Mishra w
Are you running nimbus and the supervisors in the background? It looks like
you are sshing into machines and running ./bin/storm nimbus in the
foreground, which will get killed when you exit the ssh session. Make sure
you use supervisord (http://supervisord.org/) to run nimbus and the supervisors.
On Sat, Feb 7, 2015, at 1
Hi Clay, I don't think there is a JIRA open for this. Can you please
open one and include steps to reproduce. Thanks, Harsha
On Sat, Feb 7, 2015, at 04:13 AM, clay teahouse wrote:
> Hi All,
>
> I emit my tuples in batches. Do I need to put the emit in a
> synchronized block
"My fetch and buffer are set to a couple of hundred meg and the max
spout pending is 1024": your fetch.size is probably too large, as it is
trying to fetch 200 MB of data at a time and your topic might not have
sufficient data.
On Fri, Feb 6, 2015, at 06:03 AM, clay teahouse wrote:
> Hi all,
>
> My kafk
" section in storm UI. Do check the
logs; your supervisors might be losing their connection to zookeeper or
crashing.
Which version of storm are you using? It might help if you can attach
screenshots of the storm UI. Thanks, Harsha
On Thu, Feb 5, 2015, at 11:05 AM, David Shepherd wrote:
> I hav
LocalCluster should be used for debugging a topology. There is another
constructor you can use:
LocalCluster cluster = new LocalCluster("localhost", new Long(2182));
The first param is the zookeeper host and the second is the port.
-Harsha
On Tue, Feb 3, 2015, at 07:48 PM, Shivendra Singh wrote:
. It's better to use a random
UUID to distribute among all of your partitions. -Harsha
On Tue, Feb 3, 2015, at 12:44 AM, Vineet Mishra wrote:
> Do you mean to say that the event published to Kafka is not partition
> distributed?
>
> Well while creating the topic I ensured to create # of p
How are you calling commits on your DB? Did you test it multiple times,
and is the data drop always 3.7%? Any chance that your bolt wrote
successfully to the DB but didn't call commit, and you are not
seeing the data? -Harsha
On Mon, Feb 2, 2015, at 03:43 PM, Sa Li wrote:
>
spout with parallelism set to 10. Also make sure that on the
producer side you are pushing data onto all of the 10 partitions, so that
your Kafka spout is fetching data from all of the 10 partitions. -Harsha
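The idea of matching spout parallelism to partition count can be sketched as a simple assignment. This is an illustrative model only (not storm-kafka's exact assignment code): with 10 tasks and 10 partitions, each spout task owns exactly one partition:

```java
// Sketch: round-robin style mapping of Kafka partitions to spout tasks.
// With numSpoutTasks == partition count, each task reads one partition,
// which is why data must flow into all partitions to keep all tasks busy.
public class PartitionAssignment {
    static int taskForPartition(int partition, int numSpoutTasks) {
        return partition % numSpoutTasks;
    }

    public static void main(String[] args) {
        int partitions = 10, tasks = 10; // values from the thread
        for (int p = 0; p < partitions; p++) {
            System.out.println("partition " + p + " -> spout task "
                    + taskForPartition(p, tasks));
        }
    }
}
```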
On Mon, Feb 2, 2015, at 08:55 AM, Vineet Mishra wrote:
> Hi Harsha,
>
> I
Tousif, You might be running into
https://issues.apache.org/jira/browse/STORM-130 . -Harsha
On Mon, Feb 2, 2015, at 12:28 AM, Tousif wrote:
> Thank you, Can you tell me why it might happen? i recently tried zk
> 3.3.3 with 0.9.2 and found incompatible than moved back again to 3.4.6
>
Vineet, Which kafka spout are you using? -Harsha
On Mon, Feb 2, 2015, at 05:25 AM, Vineet Mishra wrote:
> Hi,
>
> I am running Kafka Storm Engine to process real time data generated on
> a 3 node distributed cluster.
>
> Currently I have set 10 Executors for Storm Spout, w
erring too. It might be
that it's getting submitted to a LocalCluster and then getting killed
after 5 secs. You can increase the time in the code. -Harsha
On Fri, Jan 30, 2015, at 09:27 AM, Webb, Ryan L. wrote:
> Hey Guys,
>
> I am attempting to use the testing utility to do some more
Hmm.. this is strange. It looks like the supervisor is unable to find the
"kill" command. Can you check if it's in the path; run "which kill". -Harsha
On Wed, Jan 28, 2015, at 11:08 AM, Faisal Waris wrote:
>
> Hello,
>
> I have single node cluster with the default config. Th
Milad, Can you share your kafkaSpout config. -Harsha
On Mon, Jan 26, 2015, at 01:31 PM, Milad Fatenejad wrote:
> Hello:
>
> I reran my test with a replication factor of 2 but encountered the
> same issue...any other suggestions?
>
> Thanks Milad
>
> On Mon, Jan 26
Denis,
I suggest it's better to have your HTTP requests go to Kafka
and then use Storm's KafkaSpout to process them. This allows you to
not lose any events, as KafkaSpout can replay the messages
in case there is a failure in your topology.
-Harsha
On Mon
Kushan, My question was about this: "B1 and B2 are the same bolt but
running on 2 separate tasks." Are they both the same code, i.e., updating
the cassandra table? If so, don't you need to do fieldsGrouping on B1
too? -Harsha
On Tue, Jan 20, 2015, at 05:35 PM, Kushan Maskey wrote:
> Bolt
t;
>> LMK if that is sufficient. Thanks.
>>
>>
>> --
>> Kushan Maskey
>>
>> On Tue, Jan 20, 2015 at 3:45 PM, Nathan Leung
>> wrote:
>>> Actually I thought about it and you should not have to do
>>> fieldsGrouping on both X and Y; on
Kushan, That's strange; if you are using fieldsGrouping, this
shouldn't be a problem, as there is one instance of your bolt updating
one (x,y) value. It probably helps if you can paste the
topologyBuilder part of your code. -Harsha
On Tue, Jan 20, 2015, at 01:11 PM, Kushan Maskey wrote:
Armando, slots means worker slots. In this case it looks like you
assigned 3 workers to your topology. -Harsha
On Mon, Jan 19, 2015, at 09:38 AM, Armando Martinez Briones wrote:
>
> Hi.
>
> I'm rebalancing a topology, on the log of nimbus I can see the line:
>
> b.s.d.ni
Are you trying to increase the parallelism of a bolt in a running
topology. If so you can use storm rebalance command , run "bin/storm
help rebalance" for more info.
On Thu, Jan 15, 2015, at 03:14 PM, Armando Martinez Briones wrote:
> Thanks Kosala
>
> Hi.
>
> I have a completed system with 3 to
Are you passing a different zkRoot for each of those topologies in
SpoutConfig?
On Mon, Jan 5, 2015, at 01:59 AM, Miroslav Holubec wrote:
> Hi James, I have same issue, have u solved it somehow?
>
> Regards, Miroslav
>
> On 2 July 2014 at 09:46, jamesw...@yahoo.com.tw
> wrote:
>> __
>> Hi all,
>>
you might be hitting this:
https://issues.apache.org/jira/browse/STORM-598. Do you have free worker
slots available for the new topology? -Harsha
On Sat, Jan 3, 2015, at 12:09 PM, Itai Frenkel wrote:
> Anything in the worker log files?
>
> *From:* Kushan Maskey *Sent:*
> Frida
custom spouts or bolts; in your use case
it is recommended to use Kafka, where you write your log files to a Kafka
topic and use Storm's KafkaSpout to read the topic and send the data
to downstream bolts.
On Sat, Dec 27, 2014, at 04:55 AM, Vineet Mishra wrote:
> Hi,
>
> I am l
It does read from the stored offsets. For the first time when you deploy
the topology, if you intend to read from the beginning of the topic,
then set forceFromStart=true. If you kill and redeploy the topology and
you want to read from the last saved position, then make sure you set
forceFromStart=fa
Xioyong, It looks like a bug. Please file a JIRA here
https://issues.apache.org/jira/secure/Dashboard.jspa using the "create"
button, and make sure you select "Apache Storm" as the project; example:
https://issues.apache.org/jira/browse/STORM-187
-Harsha
On Wed, Dec 24, 2014, at 11:04 PM
-csrf-token:aB5nEmd7TsQOeluQpRXqKo6rLfFDw3h+L4RwKGe7zVbhzMV9tJeX3bHu+Sh0vLa+vkbo71Rq2VoXfj4c'
http://localhost:8080/api/v1/topology/wordcount-1-1419399960/deactivate
The second curl request will succeed and will give you a 302, which is a
bug in the UI REST API part, but the above request will work.
-Harsha
On T