that would also limit me to one user at a time.
Do we not expect Spark Streaming to take queries/filters from the outside
world? Does output in Spark Streaming only mean outputting to an external
source, which could then be queried?
Thanks,
Archit Thakur.
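A minimal sketch of the usual answer to that question (job name, source,
and output path below are illustrative, not from this thread): the
streaming job does not serve ad-hoc queries itself; each batch is written
out to an external store, and clients run their queries/filters against
that store.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamToExternalSink {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("stream-to-sink")
        val ssc  = new StreamingContext(conf, Seconds(10))

        // Illustrative source; any DStream works the same way.
        val lines = ssc.socketTextStream("localhost", 9999)

        lines.foreachRDD { rdd =>
          // Push each batch to an external sink (HDFS here for brevity;
          // a real job might write to HBase, Cassandra, or a database),
          // and let outside clients query that sink instead of the job.
          rdd.saveAsTextFile(s"/tmp/stream-output/batch-${System.currentTimeMillis}")
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }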
As such, we do not open any files ourselves. EventLoggingListener opens
the file to write out the events in JSON format for the history server,
but it uses the same writer (a PrintWriter object) and eventually the same
output stream (which boils down to DFSOutputStream for us). It seems
DFSOutputStream ... much of the performance impact by removing that check?
Please correct me.
Thanks & Regards,
Archit Thakur.
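A simplified sketch of the write path being described (the path is
illustrative; this is not Spark's actual EventLoggingListener code): a
single PrintWriter wraps a single HDFS output stream, so every event line
funnels through the same underlying DFSOutputStream.

    import java.io.{OutputStreamWriter, PrintWriter}
    import java.nio.charset.StandardCharsets
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs  = FileSystem.get(new Configuration())
    // Illustrative event-log path.
    val out = fs.create(new Path("/tmp/spark-events/app-1234"))
    val writer = new PrintWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8))

    // Every event is one JSON line through the same writer and, underneath,
    // the same DFSOutputStream.
    def logEvent(json: String): Unit = {
      writer.println(json)
      writer.flush()
    }

    logEvent("""{"Event":"SparkListenerApplicationStart"}""")
    writer.close()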
all the information present in the Executors tab for running executors.
Thanks,
Archit Thakur.
On Mon, Apr 20, 2015 at 1:31 PM, twinkle sachdeva <
twinkle.sachd...@gmail.com> wrote:
> Hi Archit,
>
> What is your use case and what kind of metrics are you planning to add?
>
> Thanks,
-- Forwarded message --
From: Archit Thakur
Date: Fri, Apr 17, 2015 at 4:07 PM
Subject: Addition of new Metrics for killed executors.
To: u...@spark.incubator.apache.org, u...@spark.apache.org,
d...@spark.incubator.apache.org
Hi,
We are planning to add new metrics in Spark for executors that got killed
during execution. I was just curious why this info is not already present.
Is there some reason for not adding it?
Any ideas around are welcome.
Thanks and Regards,
Archit Thakur.
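A minimal sketch of one way to surface this today without a patch (the
class name is illustrative): a SparkListener that counts executor
removals; the removal reason is where a kill shows up.

    import java.util.concurrent.atomic.AtomicInteger
    import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorRemoved}

    class KilledExecutorTracker extends SparkListener {
      val removed = new AtomicInteger(0)

      override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit = {
        removed.incrementAndGet()
        // event.reason says why the executor went away (e.g. killed by YARN).
        println(s"Executor ${event.executorId} removed: ${event.reason}")
      }
    }

    // Registered on the driver with:
    //   sc.addSparkListener(new KilledExecutorTracker())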
compression. Is there a way I can append the new batch (uncached) to the
older (cached) batch without losing the older data from the cache and
without re-caching the whole dataset?
Thanks and Regards,
Archit Thakur.
Sr Software Developer,
Guavus, Inc.
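A hedged sketch of the usual union approach (the RDD names are
illustrative): cache each incoming batch separately and union the cached
pieces; union is a narrow, lazy operation, so nothing already cached is
recomputed or evicted, and the old data is never re-cached.

    // newBatch and oldBatch are illustrative RDD names.
    newBatch.cache()
    newBatch.count()                        // materialize the new partitions

    val combined = oldBatch.union(newBatch) // lazily stitches the cached pieces
    // combined reads old partitions from oldBatch's cache and new ones from
    // newBatch's cache; avoid combined.cache(), which would store a second copy.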
On Fri, Aug 29, 2014 at 2:03 PM, Archit Thakur
wrote:
> Hi,
>
> My requirement is to run Spark on YARN without using the spark-submit
> script.
>
> I have a servlet and a Tomcat server. As and when a request comes, it
> creates a new SC and k
I set sparkConf.setMaster("yarn-cluster") but the request is stuck
indefinitely.
This works when I set
sparkConf.setMaster("yarn-client")
I am not sure why it is not launching the job in yarn-cluster mode.
Any thoughts?
Thanks and Regards,
Archit Thakur.
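A minimal sketch of the in-process pattern that does work (the object name
is illustrative). The likely reason yarn-cluster hangs: in yarn-cluster
mode the driver is supposed to run inside the YARN ApplicationMaster, so a
SparkContext constructed inside the Tomcat JVM cannot satisfy that mode;
yarn-client keeps the driver in the servlet's JVM and only the executors
run on the cluster.

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative helper: one context per application, created from the
    // servlet container's JVM.
    object EmbeddedSparkLauncher {
      def newContext(appName: String): SparkContext = {
        val conf = new SparkConf()
          .setAppName(appName)
          .setMaster("yarn-client") // driver stays in this JVM; executors on YARN
        new SparkContext(conf)
      }
    }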