No, we are not. I searched once more. One more thing I noticed: before
submitting a topology, I changed the log level to debug in cluster.xml and
started the supervisor.
I see this in the log file (note "topology.workers" 1):
b.s.d.supervisor - Starting Supervisor with conf {"dev.zookeeper.path"
"/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" 5,
"topology.builtin.metrics.bucket.size.secs" 60,
"topology.fall.back.on.java.serialization" true,
"topology.max.error.report.per.interval" 5, "zmq.linger.millis" 5000,
"topology.skip.missing.kryo.registrations" false,
"storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
"storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "random.thing"
"1.0", "topology.trident.batch.emit.interval.millis" 500,
"storm.messaging.netty.flush.check.interval.ms" 10, "nimbus.monitor.freq.secs"
10, "logviewer.childopts" "-Xmx128m", "java.library.path"
"/usr/local/lib:/opt/local/lib:/usr/lib", "topology.executor.send.buffer.size"
16384, "storm.local.dir" "/data/dmip/storm",
"storm.messaging.netty.buffer_size" 5242880,
"supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts"
true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs"
3600, "drpc.worker.threads" 64, "storm.meta.serialization.delegate"
"backtype.storm.serialization.DefaultSerializationDelegate",
"topology.worker.shared.thread.pool.size" 4, "nimbus.host" "localhost",
"storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2181,
"transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
16384, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm",
"storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true,
"storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers"
["localhost"], "transactional.zookeeper.root" "/transactional",
"topology.acker.executors" nil, "topology.transfer.buffer.size" 1024,
"topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=1%ID% ",
"supervisor.heartbeat.frequency.secs" 5,
"topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
"supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
"topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
"topology.tasks" nil, "storm.messaging.netty.max_retries" 100,
"topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy",
"nimbus.thrift.max_buffer_size" 1048576, "topology.max.spout.pending" nil,
"storm.zookeeper.retry.interval" 1000,
"storm.messaging.netty.client_worker.threads" 1,
"topology.sleep.spout.wait.strategy.time.ms" 10, "nimbus.topology.validator"
"backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
[6700 6701 6702 6703], "topology.environment" nil, "topology.debug" false,
"nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60,
"topology.message.timeout.secs" 60, "task.refresh.poll.secs" 10,
"topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port"
6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1,
"storm.messaging.netty.min_wait.ms" 100, "topology.tuple.serializer"
"backtype.storm.serialization.types.ListDelegateSerializer",
"topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy",
"topology.multilang.serializer" "backtype.storm.multilang.JsonSerializer",
"nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000,
"topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory",
"drpc.invocations.port" 3773, "logviewer.port" 8042, "zmq.threads" 1,
"storm.zookeeper.retry.times" 5, "topology.worker.receiver.thread.count" 1,
"storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin",
"topology.state.synchronization.timeout.secs" 60,
"supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600,
"storm.messaging.transport" "backtype.storm.messaging.netty.Context",
"storm.messaging.netty.server_worker.threads" 1, "logviewer.appender.name"
"A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs"
600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts"
"-Xmx1024m", "storm.cluster.mode" "distributed",
"topology.max.task.parallelism" nil,
"storm.messaging.netty.transfer.batch.size" 262144, "topology.classpath" nil}
From: Harsha [mailto:[email protected]]
Sent: Thursday, February 26, 2015 3:44 PM
To: [email protected]
Subject: Re: Why is topology.workers hardcoded to 1
Are you setting numWorkers in your topology config, like here:
https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L92
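For illustration, a minimal sketch of the pattern in the linked WordCountTopology (Storm 0.9.x API; assumes storm-core and the storm-starter RandomSentenceSpout are on the classpath, and the topology name is arbitrary):

```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;
import storm.starter.spout.RandomSentenceSpout;

public class ParallelTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // Parallelism hint of 10 controls executors, not worker processes.
        builder.setSpout("spout", new RandomSentenceSpout(), 10);

        Config conf = new Config();
        // Without this call, the topology inherits topology.workers from the
        // cluster config (1 in defaults.yaml), so every executor is packed
        // into a single worker process.
        conf.setNumWorkers(3);

        StormSubmitter.submitTopology("word-count", conf, builder.createTopology());
    }
}
```

The key distinction: parallelism hints set how many executor threads run a component, while setNumWorkers (topology.workers) sets how many JVM worker processes those executors are spread across.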
On Thu, Feb 26, 2015, at 12:40 PM, Srividhya Shanmugam wrote:
Thanks for the reply, Harsha. We have distributed supervisor nodes (2) and a
nimbus node. The storm.yaml file has the topology.workers property commented
out. When a topology with one spout and one bolt, each with a parallelism hint
of 10, was submitted before the 0.9.3 upgrade, Storm distributed this work
across multiple worker processes. The supervisor slots configured on the three
nodes have the values 6701, 6702, and 6703.
When such a topology is submitted in Storm now (after the upgrade), just one
worker process gets created, with 21 executor threads. Shouldn't Storm
distribute the work?
From: Harsha [mailto:[email protected]]
Sent: Thursday, February 26, 2015 3:33 PM
To: [email protected]
Subject: Re: Why is topology.workers hardcoded to 1
Srividhya,
Storm topologies require at least one worker to be available to run.
Hence the config sets topology.workers to 1 as the default value. Can you
explain in more detail what you are trying to achieve?
Thanks,
Harsha
On Thu, Feb 26, 2015, at 12:12 PM, Srividhya Shanmugam wrote:
I have commented out this property in storm.yaml, but it still always defaults
to 1 after we upgraded Storm to 0.9.3. Any idea why it's hardcoded?
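For context, a sketch of the relevant storm.yaml lines (the slot ports are taken from the thread; the comment about defaults.yaml reflects the stock file shipped with Storm 0.9.x, where topology.workers is 1):

```yaml
# storm.yaml -- cluster-wide overrides.
# Commenting a property out does NOT unset it; Storm falls back to the
# bundled defaults.yaml, which ships with topology.workers: 1.
# topology.workers: 4

supervisor.slots.ports:
    - 6701
    - 6702
    - 6703
```

So with the line commented out, the observed value of 1 comes from defaults.yaml rather than being hardcoded; a per-topology override via Config.setNumWorkers takes precedence over both.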
This email and any files transmitted with it are confidential, proprietary and
intended solely for the individual or entity to whom they are addressed. If you
have received this email in error please delete it immediately.