> take on a Python multi-lang library
> called Pyleus <https://github.com/Yelp/pyleus>, but as one of the main
> devs for streamparse, I prefer streamparse. :)
>
> Thanks,
> Dan
>
>
>
>
> On June 25, 2015 at 6:22:05 AM, Hemanth Yamijala (yhema...@gmail.com) wrote:
Hi,
We are using Storm 0.9.4 with Python shell bolts. A dependent internal
library was inadvertently printing a message to stdout. I understand this
will interfere with the multi-lang protocol and hence fail. But instead of
failing noisily, the result was silent failures of some messages in the
topology.
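For context, my understanding of the contract is that stdout of the Python
process must carry only multi-lang protocol messages, and any diagnostic
output has to go through storm.log (which wraps it in a protocol-level log
command). A minimal sketch, assuming the storm.py module from the multi-lang
examples (EchoBolt is just an illustrative name):

import storm

class EchoBolt(storm.BasicBolt):
    def process(self, tup):
        # storm.log sends a protocol "log" message to the worker;
        # a bare print() would be read as (broken) protocol output.
        storm.log("processing tuple %s" % tup.id)
        storm.emit([tup.values[0]])

EchoBolt().run()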
> can use this
> number (the dop is limited to the number of distinct keys anyway). So
> just set this number high and you are fine (ie, it should not be
> difficult in practice).
>
> -Matthias
>
>
> On 06/22/2015 03:05 PM, Hemanth Yamijala wrote:
> > Hi,
an expanding on
need. Is this a valid use case for Storm to support? Are there any plans for
this in the future?
Thanks
hemanth
On Mon, Jun 22, 2015 at 5:34 PM, John Yost wrote:
> This is an excellent question, need this info for myself as well.
>
> --John
>
> On Mon, Jun 22, 2015 at
Hi,
I was testing the rebalance functionality on Storm 0.9.4.
storm rebalance -w 10 -n 2
- Works as expected. It increased the number of workers to 2.
storm rebalance -w 10 -n 2 -e =20
- Works only for increasing the number of workers, but did *not* change the
number of executors of .
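For reference, the documented form of the command I was following is roughly
as below (topology and component names replaced with placeholders here):

storm rebalance <topology-name> [-w <wait-secs>] [-n <new-num-workers>] [-e <component>=<parallelism>]

e.g. storm rebalance mytopology -n 2 -e split-bolt=4

As I understand it, the executor count of a component also cannot be
rebalanced above its configured number of tasks, since tasks are fixed at
submission time.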
I came
.
The patch is marked for Storm 0.10.0. Wondering if this is planned for some
time soon?
Thanks
Hemanth
On Tue, Mar 10, 2015 at 7:05 AM, Jeremy Heiler wrote:
>
>
> On Mon, Mar 9, 2015 at 12:38 PM, Hemanth Yamijala wrote:
>>
>> Looking at a few other mail threads with similar
Hi,
We are using Storm 0.9.3. We have a topology running a Shell bolt that
launches a Python process. After running for 3-4 hours, we saw this
exception on one of the worker nodes:
2015-03-09T11:08:50.009-0500 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.NullPointerException
Frenkel wrote:
>
>> Use noneGrouping between the two bolts so the only overhead is a thread
>> context switch. Storm+Linux manages these context switches pretty
>> well. Unless you are already in the stage of CPU usage optimizations, I
>> would not sweat about it.
>>
ely, given the different scaling
requirements for processing-bound and I/O-bound bolts. Do you see this as a
concern?
Thanks
hemanth
On Wed, Jan 7, 2015 at 9:39 PM, Jens-U. Mozdzen wrote:
> Hi Hemanth,
>
> Quoting Hemanth Yamijala
>
>> Hi all,
>>
>> I guess it is common to
Hi all,
I guess it is common to build topologies where message processing in Storm
results in data that should be stored in external stores like NoSQL DBs or
message queues like Kafka.
There are two broad approaches to handle this storage:
1) Inline the storage functionality with the processing bolts
2) Delegate the storage to separate, dedicated bolts
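To make approach 2 concrete, here is a minimal sketch of a dedicated storage
bolt in Python using the multi-lang storm module. ExternalStoreClient is a
hypothetical stand-in for whatever NoSQL or Kafka client would actually be
used:

import storm

class ExternalStoreClient(object):
    # Hypothetical placeholder for a real NoSQL / Kafka client.
    def save(self, key, value):
        pass

class StorageBolt(storm.BasicBolt):
    def initialize(self, conf, context):
        self.client = ExternalStoreClient()

    def process(self, tup):
        # Upstream processing bolts emit (key, value) pairs; this bolt
        # only does the I/O, so it can be scaled independently.
        key, value = tup.values[0], tup.values[1]
        self.client.save(key, value)

StorageBolt().run()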
> forwarding it to the real collector.
> --
> *From:* Hemanth Yamijala
> *Sent:* Monday, January 5, 2015 7:20 PM
> *To:* user@storm.apache.org
> *Subject:* Hooks into Shell bolt
>
> Hi,
>
> We are planning to integrate some Python module
Hi,
We are planning to integrate some Python modules with Storm. I have been
able to get the integration going quite easily using the ShellBolt and
the Python storm module, following the example in storm-starter.
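For reference, the storm-starter pattern being followed here is essentially
the splitsentence.py multi-lang bolt, roughly:

import storm

class SplitSentenceBolt(storm.BasicBolt):
    def process(self, tup):
        # Each incoming tuple carries a sentence; emit one tuple per word.
        words = tup.values[0].split(" ")
        for word in words:
            storm.emit([word])

SplitSentenceBolt().run()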
For productionizing, after basic processing in Python, there are some steps
that we woul
1
9) Kafka spout establishes connection to Bolt 1
10) System becomes functional.
Could someone explain what could be going on here? I would be glad to
provide any additional info required (logs, etc.).
Thanks
Hemanth
On Mon, Sep 29, 2014 at 3:57 PM, Hemanth Yamijala wrote:
> Hi,
>
> We are
Hi,
We are using Apache Storm 0.9.2 and the storm-kafka module
(version 0.9.0-wip16a-scala292), which has support for Kafka 0.7.
I am trying to understand the failure handling of Kafka spout in a
particular scenario.
I have 4 workers, 1 running 1 executor of the Kafka spout, 1 running 1
executor of Bolt