Spark Yarn Cluster with Reference File

2016-09-23 Thread ABHISHEK
pl.java:58) ... 19 more -- Cheers, Abhishek

Re: Spark Yarn Cluster with Reference File

2016-09-23 Thread ABHISHEK
wrong, please help to correct it. Aditya: I have attached code here for reference. The --files option will distribute the reference file to all nodes, but the Kie session is not able to pick it up. Thanks, Abhishek On Fri, Sep 23, 2016 at 2:25 PM, Steve Loughran wrote: > > On 23 Sep 2016, at 08:33, ABHI

Re: Spark Yarn Cluster with Reference File

2016-09-23 Thread ABHISHEK
I have tried with hdfs/tmp location but it didn't work. Same error. On 23 Sep 2016 19:37, "Aditya" wrote: > Hi Abhishek, > > Try below spark submit. > spark-submit --master yarn --deploy-mode cluster --files hdfs:// > abc.com:8020/tmp/abc.drl --class com.abc.Star
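
A minimal sketch of how a file shipped with --files is usually read in YARN cluster mode: YARN localises it into each container's working directory under its base name, so the code can open it by that relative name. The file name abc.drl is taken from the thread; everything else is illustrative.

    // Sketch: resolving a file distributed via "--files hdfs://.../abc.drl"
    // in YARN cluster mode. The file is localised into the container's
    // working directory, so a relative name is enough.
    import java.io.File

    val drlFile = new File("abc.drl")   // base name of the --files argument
    require(drlFile.exists(), s"Reference file not found at ${drlFile.getAbsolutePath}")
    // pass drlFile.getAbsolutePath to the Kie/Drools session builder here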

Restful WS for Spark

2016-09-30 Thread ABHISHEK
. Thanks, Abhishek

Is there a way to get column names using hiveContext ?

2014-12-07 Thread abhishek
Hi, I have iplRDD which is a json, and I do below steps and query through hivecontext. I get the results but without columns headers. Is there is a way to get the columns names ? val teamRDD = hiveContext.jsonRDD(iplRDD) teamRDD.registerTempTable("teams") hiveContext.cacheTable("teams") val res
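
A small Scala sketch, assuming the "teams" temp table registered in the thread: the column headers can be read from the schema of the query result rather than from the rows themselves.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("column-names"))
    val hiveContext = new HiveContext(sc)

    // teamRDD.registerTempTable("teams") has already been done, as in the thread
    val result = hiveContext.sql("SELECT * FROM teams")
    val columnNames = result.schema.fields.map(_.name)   // the column headers
    columnNames.foreach(println)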

Re: Need help for Spark-JobServer setup on Maven (for Java programming)

2014-12-30 Thread abhishek
Hey, why specifically Maven? We set up a Spark Job Server through sbt, which is an easy way to get the job server up and running. On 30 Dec 2014 13:32, "Sasi [via Apache Spark User List]" < ml-node+s1001560n20896...@n3.nabble.com> wrote: > > Does my question make sense or required some elaboration? > > Sasi > > _

Re: Need help for Spark-JobServer setup on Maven (for Java programming)

2014-12-30 Thread abhishek
Ohh... Just curious: we had a similar use case to yours, getting data out of Cassandra. Since the job server is a REST architecture, all we need is a URL to access it. Why does integrating with your framework matter here when all we need is a URL? On 30 Dec 2014 14:05, "Sasi [via Apache Spark User List]" <

Re: Need help for Spark-JobServer setup on Maven (for Java programming)

2014-12-30 Thread abhishek
Frankly speaking, I have never tried this volume in practice, but I believe it should work. On 30 Dec 2014 15:26, "Sasi [via Apache Spark User List]" < ml-node+s1001560n20902...@n3.nabble.com> wrote: > Thanks Abhishek. We understand your point and will try using REST URL. > H

Re: Removing JARs from spark-jobserver

2015-01-10 Thread abhishek
There is a path, /tmp/spark-jobserver/file, where all the JARs are kept by default. Deleting them from there should probably work. On 11 Jan 2015 12:51, "Sasi [via Apache Spark User List]" < ml-node+s1001560n21081...@n3.nabble.com> wrote: > How to remove submitted JARs from spark-jobserver? > > > >

Re: Removing JARs from spark-jobserver

2015-01-11 Thread abhishek
Nice! Good to know On 11 Jan 2015 21:10, "Sasi [via Apache Spark User List]" < ml-node+s1001560n21084...@n3.nabble.com> wrote: > Thank you Abhishek. That works. > > -- > If you reply to this email, your message will be added to the

Re: How to define SparkContext with Cassandra connection for spark-jobserver?

2015-01-15 Thread abhishek
In the spark job server bin folder, you will find the application.conf file; put context-settings { spark.cassandra.connection.host = } Hope this works. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-define-SparkContext-with-
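
If editing application.conf is not convenient, the same property can also be set programmatically, since the spark-cassandra-connector reads its contact point from spark.cassandra.connection.host. A hedged sketch (the host value is a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("jobserver-cassandra")
      .set("spark.cassandra.connection.host", "10.0.0.1")   // placeholder host
    val sc = new SparkContext(conf)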

Fails: Spark sbt/sbt publish local

2014-05-25 Thread ABHISHEK
Hi, I'm trying to install Spark along with Shark. Here's configuration details: Spark 0.9.1 Shark 0.9.1 Scala 2.10.3 Spark assembly was successful but running "sbt/sbt publish-local" failed. Please refer attached log for more details and advise. Thanks, Abhishek Sparkhome&

Re: Fails: Spark sbt/sbt publish local

2014-05-25 Thread ABHISHEK
14 at 8:46 AM, Aaron Davidson wrote: > I suppose you actually ran "publish-local" and not "publish local" like > your example showed. That being the case, could you show the compile error > that occurs? It could be related to the hadoop version. > > > On Sun

Trigger on GroupStateTimeout with no new data in group

2021-02-11 Thread Abhishek Gupta
Hi All, I had a question about modeling a user session kind of analytics use-case in Spark Structured Streaming. Is there a way to model something like this using Arbitrary stateful Spark streaming User session -> reads a few FAQS on a website and then decides to create a ticket or not FAQ Deflec

[Spark Core] saveAsTextFile is unable to rename a directory using hadoop-azure NativeAzureFileSystem

2021-09-13 Thread Abhishek Jindal
aceImpl.java:434) [error] at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2788) [error] ... 5 more I am currently using spark-core-3.1.1.jar with hadoop-azure-3.2.2.jar but this same issue also occurs in hadoop-azure-3.3.1.jar as well. Please advise how I should solve this issue. Thanks, Abhishek

Does Apache Spark 3 support GPU usage for Spark RDDs?

2021-09-21 Thread Abhishek Shakya
usage for RDD interfaces? PS: The question is posted in stackoverflow as well: Link <https://stackoverflow.com/questions/69273205/does-apache-spark-3-support-gpu-usage-for-spark-rdds> Regards, - Abhishek Shakya Senior Data Scientist 1, Contact: +919002319890 | Em

config: minOffsetsPerTrigger not working

2023-04-27 Thread Abhishek Singla
t:7077", "spark.app.name": "app", "spark.sql.streaming.kafka.useDeprecatedOffsetFetching": false, "spark.sql.streaming.metricsEnabled": true } But these configs do not seem to be working as I can see Spark processing batches of 3k-15k immediately one after another. Is there something I am missing? Ref: https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html Regards, Abhishek Singla

Re: config: minOffsetsPerTrigger not working

2023-04-27 Thread Abhishek Singla
:* Use it at your own risk. Any and all responsibility for any > loss, damage or destruction of data or any other property which may arise > from relying on this email's technical content is explicitly disclaimed. > The author will in no case be liable for any monetary damages arising f

Facing Error org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for s3ablock-0001-

2024-01-17 Thread Abhishek Singla
ionId, appConfig)) .option("checkpointLocation", appConfig.getChk().getPath()) .start() .awaitTermination(); Regards, Abhishek Singla

Re: Facing Error org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for s3ablock-0001-

2024-02-13 Thread Abhishek Singla
Hi Team, Could someone provide some insights into this issue? Regards, Abhishek Singla On Wed, Jan 17, 2024 at 11:45 PM Abhishek Singla < abhisheksingla...@gmail.com> wrote: > Hi Team, > > Version: 3.2.2 > Java Version: 1.8.0_211 > Scala Version: 2.12.15 > Cluster: S

Display a warning in EMR welcome screen

2024-05-11 Thread Abhishek Basu
allowing it to perform the real task. Thanks Abhishek Sent from Yahoo Mail for iPhone

Spark1.3.1 build issue with CDH5.4.0 getUnknownFields

2015-05-28 Thread Abhishek Tripathi
Hi , I'm using CDH5.4.0 quick start VM and tried to build Spark with Hive compatibility so that I can run Spark sql and access temp table remotely. I used below command to build Spark, it was build successful but when I tried to access Hive data from Spark sql, I get error. Thanks, Abhi --

Read/write metrics for jobs which use S3

2015-06-16 Thread Abhishek Modi
I mostly use Amazon S3 for reading input data and writing output data for my Spark jobs. I want to know the number of bytes read & written by my job from S3. In Hadoop, there are FileSystemCounters for this; is there something similar in Spark? If there is, can you please guide me on how to use

Output the data to external database at particular time in spark streaming

2016-03-08 Thread Abhishek Anand
I have a spark streaming job where I am aggregating the data by doing reduceByKeyAndWindow with an inverse function. I am keeping the data in memory for up to 2 hours, and in order to output the reduced data to an external storage I conditionally need to push the data to the DB, say at every 15th minute of

Disk Full on one Worker is leading to Job Stuck and Executor Unresponsive

2016-03-31 Thread Abhishek Anand
Hi, Why is it that when the disk space is full on one of the workers, the executor on that worker becomes unresponsive and the jobs on that worker fail with the exception 16/03/29 10:49:00 ERROR DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file /data/spark-e

Re: Disk Full on one Worker is leading to Job Stuck and Executor Unresponsive

2016-03-31 Thread Abhishek Anand
Yu wrote: > Can you show the stack trace ? > > The log message came from > DiskBlockObjectWriter#revertPartialWritesAndClose(). > Unfortunately, the method doesn't throw exception, making it a bit hard > for caller to know of the disk full condition. > > On Thu, Mar 3

Re: Disk Full on one Worker is leading to Job Stuck and Executor Unresponsive

2016-04-01 Thread Abhishek Anand
(SingleThreadEventExecutor.java:116) ... 1 more Cheers !! Abhi On Fri, Apr 1, 2016 at 9:04 AM, Abhishek Anand wrote: > This is what I am getting in the executor logs > > 16/03/29 10:49:00 ERROR DiskBlockObjectWriter: Uncaught exception while > reverting partial writes to file &

Timeout in mapWithState

2016-04-04 Thread Abhishek Anand
What exactly is the timeout in mapWithState? I want the keys to get removed from the memory if there is no data received on that key for 10 minutes. How can I achieve this in mapWithState? Regards, Abhi
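
A sketch of the timeout mechanics, with illustrative types: after StateSpec.timeout(Minutes(10)), a key that receives no data for roughly ten minutes is invoked one last time with state.isTimingOut() == true and is then dropped from the state store.

    import org.apache.spark.streaming.{Minutes, State, StateSpec}

    val trackState = (key: String, value: Option[Int], state: State[Long]) => {
      if (state.isTimingOut()) {
        None                                  // key expired: emit nothing, state is removed
      } else {
        val updated = state.getOption.getOrElse(0L) + value.getOrElse(0)
        state.update(updated)
        Some((key, updated))
      }
    }

    val spec = StateSpec.function(trackState).timeout(Minutes(10))
    // pairDStream.mapWithState(spec)   // pairDStream: DStream[(String, Int)] from the job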

Fwd: Facing Unusual Behavior with the executors in spark streaming

2016-04-05 Thread Abhishek Anand
Hi, Needed inputs for a couple of issues that I am facing in my production environment. I am using spark version 1.4.0 spark streaming. 1) It so happens that the worker is lost on a machine and the executor still shows up in the executors tab in the UI. Even when I kill a worker using kill -9

RE: removing header from csv file

2016-04-26 Thread Mishra, Abhishek
You should be doing something like this: data = sc.textFile('file:///path1/path/test1.csv') header = data.first() #extract header #print header data = data.filter(lambda x:x !=header) #print data Hope it helps. Sincerely, Abhishek +91-7259028700 From: nihed mbarek [mailto:nihe...

Clear Threshold in Logistic Regression ML Pipeline

2016-05-03 Thread Abhishek Anand
Hi All, I am trying to build a logistic regression pipeline in ML. How can I clear the threshold, which by default is 0.5? In mllib I am able to clear the threshold to get the raw predictions using the model.clearThreshold() function. Regards, Abhi

Re: removing header from csv file

2016-05-03 Thread Abhishek Anand
You can use this function to remove the header from your dataset(applicable to RDD) def dropHeader(data: RDD[String]): RDD[String] = { data.mapPartitionsWithIndex((idx, lines) => { if (idx == 0) { lines.drop(1) } lines }) } Abhi On Wed, Apr 27, 2016 at 12:5
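
A tightened sketch of the same helper: relying on the side effect of drop on the underlying iterator is fragile, so it is safer to return the dropped iterator itself for the first partition.

    import org.apache.spark.rdd.RDD

    def dropHeader(data: RDD[String]): RDD[String] =
      data.mapPartitionsWithIndex { (idx, lines) =>
        if (idx == 0) lines.drop(1) else lines
      }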

Calculating log-loss for the trained model in Spark ML

2016-05-03 Thread Abhishek Anand
I am building a ML pipeline for logistic regression. val lr = new LogisticRegression() lr.setMaxIter(100).setRegParam(0.001) val pipeline = new Pipeline().setStages(Array(geoDimEncoder,clientTypeEncoder, devTypeDimIdEncoder,pubClientIdEncoder,tmpltIdEncoder, hourEnc
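
For reference, log-loss can be computed directly from the pipeline output; a sketch assuming the Spark 1.x ML API, where the transformed DataFrame carries "label" and "probability" (vector) columns, and where pipelineModel and testDf stand for the fitted pipeline and the hold-out data.

    import org.apache.spark.mllib.linalg.Vector

    val eps = 1e-15
    val scored = pipelineModel.transform(testDf).select("label", "probability")
    val logLoss = scored.rdd.map { row =>
      val label = row.getDouble(0)
      val p1 = row.getAs[Vector](1)(1)                      // P(label = 1)
      val p = math.min(math.max(p1, eps), 1 - eps)          // clip to avoid log(0)
      -(label * math.log(p) + (1 - label) * math.log(1 - p))
    }.mean()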

Running glm in sparkR (data pre-processing step)

2016-05-30 Thread Abhishek Anand
Hi , I want to run glm variant of sparkR for my data that is there in a csv file. I see that the glm function in sparkR takes a spark dataframe as input. Now, when I read a file from csv and create a spark dataframe, how could I take care of the factor variables/columns in my data ? Do I need t

Re: Running glm in sparkR (data pre-processing step)

2016-05-30 Thread Abhishek Anand
type string) will be one-hot > encoded automatically. > So pre-processing like `as.factor` is not necessary, you can directly feed > your data to the model training. > > Thanks > Yanbo > > 2016-05-30 2:06 GMT-07:00 Abhishek Anand : > >> Hi , >> >> I want to ru

spark.hadoop.dfs.replication parameter not working for kafka-spark streaming

2016-05-31 Thread Abhishek Anand
My spark streaming checkpoint directory is being written to HDFS with default replication factor of 3. In my streaming application where I am listening from kafka and setting the dfs.replication = 2 as below the files are still being written with replication factor=3 SparkConf sparkConfig = new S

Re: spark.hadoop.dfs.replication parameter not working for kafka-spark streaming

2016-05-31 Thread Abhishek Anand
I also tried jsc.sparkContext().sc().hadoopConfiguration().set("dfs.replication", "2") But it is still not working. Any ideas why it is not working? Abhi On Tue, May 31, 2016 at 4:03 PM, Abhishek Anand wrote: > My spark streaming checkpoint directory is being written

Change spark dataframe to LabeledPoint in Java

2016-06-30 Thread Abhishek Anand
Hi , I have a dataframe which i want to convert to labeled point. DataFrame labeleddf = model.transform(newdf).select("label","features"); How can I convert this to a LabeledPoint to use in my Logistic Regression model. I could do this in scala using val trainData = labeleddf.map(row => Labeled
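
A hedged completion of the Scala form mentioned above, assuming Spark 1.x where DataFrame.map yields an RDD and the "features" column holds an mllib Vector (e.g. produced by VectorAssembler); in Java, the same mapping can be written over labeleddf.toJavaRDD().

    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.sql.Row

    val trainData = labeleddf.map { case Row(label: Double, features: Vector) =>
      LabeledPoint(label, features)
    }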

Concatenate the columns in dataframe to create new collumns using Java

2016-07-18 Thread Abhishek Anand
Hi, I have a dataframe say having C0,C1,C2 and so on as columns. I need to create interaction variables to be taken as input for my program. For eg - I need to create I1 as concatenation of C0,C3,C5 Similarly, I2 = concat(C4,C5) and so on .. How can I achieve this in my Java code for conca
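
A sketch of one way to do this with the built-in concat function; column names C0..C5 and I1/I2 follow the thread, and df stands for the input DataFrame. The Java equivalent uses functions.concat(df.col(...)) exactly as in the replies below.

    import org.apache.spark.sql.functions.{col, concat}

    val withInteractions = df
      .withColumn("I1", concat(col("C0"), col("C3"), col("C5")))
      .withColumn("I2", concat(col("C4"), col("C5")))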

Re: Concatenate the columns in dataframe to create new collumns using Java

2016-07-18 Thread Abhishek Anand
< columns.length; i++) { > selectColumns[i]=df.col(columns[i]); > } > > > selectColumns[columns.length]=functions.concat(df.col("firstname"), > df.col("lastname")); > > df.select(selectColumns).show(); > --

Re: Concatenate the columns in dataframe to create new collumns using Java

2016-07-18 Thread Abhishek Anand
catColumns.length; i++) { > concatColumns[i]=df.col(array[i]); > } > > return functions.concat(concatColumns).alias(fieldName); > } > > > > On Mon, Jul 18, 2016 at 2:14 PM, Abhishek Anand > wrote: > >> Hi Nihed, >> >>

Relative path in absolute URI

2016-08-02 Thread Abhishek Ranjan
up incorrect path. Did anyone encounter a similar problem with Spark 2.0? With Thanks, Abhishek

UNSUBSCRIBE

2016-08-09 Thread abhishek singh

How to unpack the values of an item in a RDD so I can create a RDD with multiple items?

2015-12-13 Thread Abhishek Shivkumar
in line[1]]) but it throws an error saying "AttributeError: 'PipelinedRDD' object has no attribute 'flatmap" Can someone tell me the right way to unpack the values to different items in the new RDD? Thank you! With Regards, Abhishek S

Re: PairRDD(K, L) to multiple files by key serializing each value in L before

2015-12-16 Thread Abhishek Shivkumar
for ele in line[1]: 4. Write every ele into the file created. 5. Close the file. Do you think this works? Thanks Abhishek S Thank you! With Regards, Abhishek S On Wed, Dec 16, 2015 at 1:05 AM, Daniel Valdivia wrote: > Hello everyone, > > I have a PairRDD with a set o

Error on using updateStateByKey

2015-12-18 Thread Abhishek Anand
I am trying to use updateStateByKey but receiving the following error. (Spark Version 1.4.0) Can someone please point out what might be the possible reason for this error. *The method updateStateByKey(Function2,Optional,Optional>) in the type JavaPairDStream is not applicable for the arguments *

[Spark-SQL] Custom aggregate function for GrouppedData

2016-01-05 Thread Abhishek Gayakwad
t(Collection collection) { return collection.stream().map(Object::toString).collect(Collectors.joining(",")); } } Please suggest if there is a better way of doing this. Regards, Abhishek

Re: [Spark-SQL] Custom aggregate function for GrouppedData

2016-01-07 Thread Abhishek Gayakwad
ala/index.html#org.apache.spark.sql.GroupedDataset> > has > mapGroups, which sounds like what you are looking for. You can also write > a custom Aggregator > <https://docs.cloud.databricks.com/docs/spark/1.6/index.html#examples/Dataset%20Aggregator.html> > > On Tue, Jan 5, 20

Getting kafka offsets at beginning of spark streaming application

2016-01-11 Thread Abhishek Anand
Hi, Is there a way to fetch the offsets from where spark streaming starts reading from Kafka when my application starts? What I am trying is to create an initial RDD with offsets at a particular time passed as input from the command line and the offsets from where my spark streami

Worker's BlockManager Folder not getting cleared

2016-01-24 Thread Abhishek Anand
Hi All, How long are the shuffle files and data files stored in the block manager folder of the workers? I have a spark streaming job with a window duration of 2 hours and a slide interval of 15 minutes. When I execute the following command in my block manager path find . -type f -cmin +150 -name "

Re: Worker's BlockManager Folder not getting cleared

2016-01-26 Thread Abhishek Anand
sues.apache.org/jira/browse/SPARK-10975 > With spark >= 1.6: > https://issues.apache.org/jira/browse/SPARK-12430 > and also be aware of: > https://issues.apache.org/jira/browse/SPARK-12583 > > > On 25/01/2016 07:14, Abhishek Anand wrote: > > Hi All, > > How long the s

Repartition taking place for all previous windows even after checkpointing

2016-01-28 Thread Abhishek Anand
Hi All, Can someone help me with the following doubts regarding checkpointing : My code flow is something like follows -> 1) create direct stream from kafka 2) repartition kafka stream 3) mapToPair followed by reduceByKey 4) filter 5) reduceByKeyAndWindow without the inverse function 6) writ

Re: Repartition taking place for all previous windows even after checkpointing

2016-02-01 Thread Abhishek Anand
Any insights on this ? On Fri, Jan 29, 2016 at 1:08 PM, Abhishek Anand wrote: > Hi All, > > Can someone help me with the following doubts regarding checkpointing : > > My code flow is something like follows -> > > 1) create direct stream from kafka > 2) repartition k

Stateful Operation on JavaPairDStream Help Needed !!

2016-02-11 Thread Abhishek Anand
Hi All, I have a use case like the following in my production environment, where I am listening from kafka with a slideInterval of 1 min and a windowLength of 2 hours. I have a JavaPairDStream where for each key I am getting the same key but with a different value, which might appear in the same batch or some

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-13 Thread Abhishek Anand
there. Is there any other work around ? Cheers!! Abhi On Fri, Feb 12, 2016 at 3:33 AM, Sebastian Piu wrote: > Looks like mapWithState could help you? > On 11 Feb 2016 8:40 p.m., "Abhishek Anand" > wrote: > >> Hi All, >> >> I have an use case like follows

Re: Worker's BlockManager Folder not getting cleared

2016-02-13 Thread Abhishek Anand
Hi All, Any ideas on this one ? The size of this directory keeps on growing. I can see there are many files from a day earlier too. Cheers !! Abhi On Tue, Jan 26, 2016 at 7:13 PM, Abhishek Anand wrote: > Hi Adrian, > > I am running spark in standalone mode. > > The spark ve

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-15 Thread Abhishek Anand
ince release of 1.6.0 >> e.g. >> SPARK-12591 NullPointerException using checkpointed mapWithState with >> KryoSerializer >> >> which is in the upcoming 1.6.1 >> >> Cheers >> >> On Sat, Feb 13, 2016 at 12:05 PM, Abhishek Anand > > wrote: >>

Saving Kafka Offsets to Cassandra at beginning of each batch in Spark Streaming

2016-02-15 Thread Abhishek Anand
I have a kafka rdd and I need to save the offsets to a cassandra table at the beginning of each batch. Basically I need to write the offsets of the type Offsets below, which I am getting inside foreachRDD, to cassandra. The javaFunctions api to write to cassandra needs an rdd. How can I create a rdd from
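
A sketch of the driver-side approach suggested later in the thread: the offsets are already available on the driver at the start of each batch by casting the Kafka RDD, so they can be written with a plain Cassandra client instead of building an RDD just to persist them. directKafkaStream comes from the job; saveOffset is a hypothetical helper wrapping your Cassandra session.

    import org.apache.spark.streaming.kafka.HasOffsetRanges

    directKafkaStream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      offsetRanges.foreach { o =>
        saveOffset(o.topic, o.partition, o.fromOffset, o.untilOffset)  // hypothetical helper
      }
      // ...process rdd as usual...
    }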

Abnormally large deserialisation time for some tasks

2016-02-16 Thread Abhishek Modi
I'm doing a mapPartitions on a rdd cached in memory followed by a reduce. Here is my code snippet // myRdd is an rdd consisting of Tuple2[Int,Long] myRdd.mapPartitions(rangify).reduce( (x,y) => (x._1+y._1,x._2 ++ y._2)) //The rangify function def rangify(l: Iterator[ Tuple2[Int,Long] ]) : Iterato

Unusually large deserialisation time

2016-02-16 Thread Abhishek Modi
using 20 executors with 1 core for each executor. The cached rdd has 60 blocks. The problem is for every 2-3 runs of the job, there is a task which has an abnormally large deserialisation time. Screenshot attached Thank you, Abhishek -

Re: Re: Unusually large deserialisation time

2016-02-16 Thread Abhishek Modi
Darren: this is not the last task of the stage. Thank you, Abhishek e: abshkm...@gmail.com p: 91-8233540996 On Tue, Feb 16, 2016 at 6:52 PM, Darren Govoni wrote: > There were some posts in this group about it. Another person also saw the > deadlock on next to last or last stag

Re: Re: Unusually large deserialisation time

2016-02-16 Thread Abhishek Modi
PS - I don't get this behaviour in all the cases. I did many runs of the same job & i get this behaviour in around 40% of the cases. Task 4 is the bottom row in the metrics table Thank you, Abhishek e: abshkm...@gmail.com p: 91-8233540996 On Tue, Feb 16, 2016 at 11:19 PM, Abhishek Mo

Re: Saving Kafka Offsets to Cassandra at beginning of each batch in Spark Streaming

2016-02-16 Thread Abhishek Anand
forward to just use the normal cassandra client > to save them from the driver. > > On Tue, Feb 16, 2016 at 1:15 AM, Abhishek Anand > wrote: > >> I have a kafka rdd and I need to save the offsets to cassandra table at >> the begining of each batch. >> >> Basi

Re: Worker's BlockManager Folder not getting cleared

2016-02-17 Thread Abhishek Anand
Looking for answer to this. Is it safe to delete the older files using find . -type f -cmin +200 -name "shuffle*" -exec rm -rf {} \; For a window duration of 2 hours how older files can we delete ? Thanks. On Sun, Feb 14, 2016 at 12:34 PM, Abhishek Anand wrote: > Hi All, >

Spark Streaming with Kafka Use Case

2016-02-17 Thread Abhishek Anand
I have a spark streaming application running in production. I am trying to find a solution for a particular use case when my application has a downtime of say 5 hours and is restarted. Now, when I start my streaming application after 5 hours there would be a considerable amount of data then in the Ka

Sample project on Image Processing

2016-02-22 Thread Mishra, Abhishek
Hello, I am working on image processing samples. I was wondering if anyone has worked on an image processing project in Spark. Please let me know if any sample project or example is available. Please guide me on this. Sincerely, Abhishek

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-22 Thread Abhishek Anand
Any Insights on this one ? Thanks !!! Abhi On Mon, Feb 15, 2016 at 11:08 PM, Abhishek Anand wrote: > I am now trying to use mapWithState in the following way using some > example codes. But, by looking at the DAG it does not seem to checkpoint > the state and when restarting the ap

java.io.IOException: java.lang.reflect.InvocationTargetException on new spark machines

2016-02-22 Thread Abhishek Anand
Hi , I am getting the following exception on running my spark streaming job. The same job has been running fine since long and when I added two new machines to my cluster I see the job failing with the following exception. 16/02/22 19:23:01 ERROR Executor: Exception in task 2.0 in stage 4229.0

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-22 Thread Abhishek Anand
at 1:25 AM, Shixiong(Ryan) Zhu wrote: > Hey Abhi, > > Could you post how you use mapWithState? By default, it should do > checkpointing every 10 batches. > However, there is a known issue that prevents mapWithState from > checkpointing in some special cases: > https://issues.ap

RE: Sample project on Image Processing

2016-02-22 Thread Mishra, Abhishek
Thank you Everyone. I am to work on a PoC with 2 types of images; that will basically be two PoCs: face recognition and map data processing. I am looking at these links and hopefully will get an idea. Thanks again. Will post the queries as and when I get doubts. Sincerely, Abhishek From: ndj

Query Kafka Partitions from Spark SQL

2016-02-23 Thread Abhishek Anand
Is there a way to query the json (or any other format) data stored in kafka using spark sql by providing the offset range on each of the brokers ? I just want to be able to query all the partitions in a sq manner. Thanks ! Abhi

value from groubBy paired rdd

2016-02-23 Thread Mishra, Abhishek
alue grouped=pairs.groupByKey()#grouping values as per key grouped_val= grouped.map(lambda x : (list(x[1]))).collect() print grouped_val Thanks in Advance, Sincerely, Abhishek

LDA topic Modeling spark + python

2016-02-24 Thread Mishra, Abhishek
ed status. The topic length being 2000 and value of k or number of words being 3. Please, if you can provide me with some link or some code base on spark with python ; I would be grateful. Looking forward for a reply, Sincerely, Abhishek

RE: LDA topic Modeling spark + python

2016-02-24 Thread Mishra, Abhishek
Hello All, If someone has any leads on this please help me. Sincerely, Abhishek From: Mishra, Abhishek Sent: Wednesday, February 24, 2016 5:11 PM To: user@spark.apache.org Subject: LDA topic Modeling spark + python Hello All, I am doing a LDA model, please guide me with something. I

Re: java.io.IOException: java.lang.reflect.InvocationTargetException on new spark machines

2016-02-25 Thread Abhishek Anand
On changing the default compression codec from snappy to lzf, the errors are gone! How can I fix this using snappy as the codec? Is there any downside to using lzf, since snappy is the default codec that ships with Spark? Thanks !!! Abhi On Mon, Feb 22, 2016 at 7:42 PM, Abhishek Anand

Re: java.io.IOException: java.lang.reflect.InvocationTargetException on new spark machines

2016-02-26 Thread Abhishek Anand
Any insights on this ? On Fri, Feb 26, 2016 at 1:21 PM, Abhishek Anand wrote: > On changing the default compression codec which is snappy to lzf the > errors are gone !! > > How can I fix this using snappy as the codec ? > > Is there any downside of using lzf as snappy is the

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-27 Thread Abhishek Anand
stateDStream.stateSnapshots(); > > > On Mon, Feb 22, 2016 at 12:25 PM, Abhishek Anand > wrote: > >> Hi Ryan, >> >> Reposting the code. >> >> Basically my use case is something like - I am receiving the web >> impression logs and may get the noti

Re: java.io.IOException: java.lang.reflect.InvocationTargetException on new spark machines

2016-02-29 Thread Abhishek Anand
our new machines? > > On Fri, Feb 26, 2016 at 11:05 PM, Abhishek Anand > wrote: > >> Any insights on this ? >> >> On Fri, Feb 26, 2016 at 1:21 PM, Abhishek Anand >> wrote: >> >>> On changing the default compression codec which is snappy to lzf the &g

Re: Stateful Operation on JavaPairDStream Help Needed !!

2016-02-29 Thread Abhishek Anand
wrote: > Sorry that I forgot to tell you that you should also call `rdd.count()` > for "reduceByKey" as well. Could you try it and see if it works? > > On Sat, Feb 27, 2016 at 1:17 PM, Abhishek Anand > wrote: > >> Hi Ryan, >> >> I am using mapWithState

External Table not getting updated from parquet files written by spark streaming

2015-11-19 Thread Abhishek Anand
Hi , I am using spark streaming to write the aggregated output as parquet files to the hdfs using SaveMode.Append. I have an external table created like : CREATE TABLE if not exists rolluptable USING org.apache.spark.sql.parquet OPTIONS ( path "hdfs:" ); I had an impression that in case o
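
One thing worth trying, sketched under the assumption that the table metadata is being cached: HiveContext can refresh the cached file listing for an external parquet table before it is queried again. hiveContext stands for the context used to create and query the table.

    hiveContext.refreshTable("rolluptable")
    hiveContext.sql("SELECT count(*) FROM rolluptable").show()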

Getting the batch time of the active batches in spark streaming

2015-11-24 Thread Abhishek Anand
Hi, I need to get the batch time of the active batches which appear on the Spark Streaming tab of the UI. How can this be achieved in Java? BR, Abhi

Unable to use "Batch Start Time" on worker nodes.

2015-11-26 Thread Abhishek Anand
Hi , I need to use batch start time in my spark streaming job. I need the value of batch start time inside one of the functions that is called within a flatmap function in java. Please suggest me how this can be done. I tried to use the StreamingListener class and set the value of a variable in

Re: Unable to use "Batch Start Time" on worker nodes.

2015-11-30 Thread Abhishek Anand
orm that allows you specify a function with two params - the > parent RDD and the batch time at which the RDD was generated. > > TD > > On Thu, Nov 26, 2015 at 1:33 PM, Abhishek Anand > wrote: > >> Hi , >> >> I need to use batch start time in my spark streaming job
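
A sketch of the transform-based approach described in the reply: the two-argument variant exposes the batch time, which can be attached to every record so that functions running on the workers (including the flatMap mentioned above) can read it. inputStream stands for the job's DStream.

    import org.apache.spark.streaming.Time

    val withBatchTime = inputStream.transform { (rdd, batchTime: Time) =>
      val ts = batchTime.milliseconds          // batch start time of this micro-batch
      rdd.map(record => (ts, record))
    }
    // withBatchTime.flatMap { case (ts, record) => ... }   // ts is available on the workers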

Is it possible to pass additional parameters to a python function when used inside RDD.filter method?

2015-12-04 Thread Abhishek Shivkumar
separate parameter to my_func, besides the item that goes into it. How can I do that? I know my_item will refer to one item that comes from my_rdd and how can I pass my own parameter (let's say my_param) as an additional parameter to my_func? Thanks Abhishek S -- *NOTICE AND DISCLAIMER*

Re: Is it possible to pass additional parameters to a python function when used inside RDD.filter method?

2015-12-04 Thread Abhishek Shivkumar
Excellent. that did work - thanks. On 4 December 2015 at 12:35, Praveen Chundi wrote: > Passing a lambda function should work. > > my_rrd.filter(lambda x: myfunc(x,newparam)) > > Best regards, > Praveen Chundi > > > On 04.12.2015 13:19, Abhishek Shivkumar wrote: &g

How to access a RDD (that has been broadcasted) inside the filter method of another RDD?

2015-12-04 Thread Abhishek Shivkumar
Hi, I have RDD1 that is broadcasted. I have a user defined method for the filter functionality of RDD2, written as follows: RDD2.filter(my_func) I want to access the values of RDD1 inside my_func. Is that possible? Should I pass RDD1 as a parameter into my_func? Thanks Abhishek S
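
For reference, the usual workaround (shown in Scala here; the same pattern applies in PySpark): an RDD cannot be referenced inside another RDD's closure, so the small RDD is collected on the driver, broadcast, and read via .value inside the filter function.

    val rdd1Values = rdd1.collect().toSet            // assumes RDD1 is small enough to collect
    val rdd1Broadcast = sc.broadcast(rdd1Values)

    val filtered = rdd2.filter(x => rdd1Broadcast.value.contains(x))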

Spark Interview Questions

2015-07-28 Thread Mishra, Abhishek
Hello, Please help me with links or documents for Apache Spark interview questions and answers, and also for the related tools for which questions could be asked. Thanking you all. Sincerely, Abhishek - To unsubscribe

RE: Spark Interview Questions

2015-07-29 Thread Mishra, Abhishek
Hello Vaquar, I have working knowledge and experience in Spark. I just wanted to test or do a mock round to evaluate myself. Thank you for the reply, Please share something if you have for the same. Sincerely, Abhishek From: vaquar khan [mailto:vaquar.k...@gmail.com] Sent: Wednesday, July 29

MongoDB and Spark

2015-09-11 Thread Mishra, Abhishek
Hello, Is there any way to query multiple collections from MongoDB using Spark and Java? And I want to create only one Configuration object. Please help if anyone has something regarding this. Thank You Abhishek

RE: MongoDB and Spark

2015-09-11 Thread Mishra, Abhishek
Anything using Spark RDD’s ??? Abhishek From: Sandeep Giri [mailto:sand...@knowbigdata.com] Sent: Friday, September 11, 2015 3:19 PM To: Mishra, Abhishek; user@spark.apache.org; d...@spark.apache.org Subject: Re: MongoDB and Spark use map-reduce. On Fri, Sep 11, 2015, 14:32 Mishra, Abhishek

RE: MongoDB and Spark

2015-09-11 Thread Mishra, Abhishek
. Abhishek From: Corey Nolet [mailto:cjno...@gmail.com] Sent: Friday, September 11, 2015 7:58 PM To: Sandeep Giri Cc: Mishra, Abhishek; user@spark.apache.org; d...@spark.apache.org Subject: Re: MongoDB and Spark Unfortunately, MongoDB does not directly expose its locality via its client API so the

Finding unique across all columns in dataset

2016-09-19 Thread Abhishek Anand
I have an rdd which contains 14 different columns. I need to find the distinct across all the columns of the rdd and write it to hdfs. How can I achieve this? Is there any distributed data structure that I can use and keep on updating as I traverse the new rows? Regards, Abhi
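
One possible reading of the question, sketched below: compute the distinct values per column by exploding each row into (columnIndex, value) pairs and de-duplicating, which keeps the whole job distributed. rows is assumed to be an RDD[Seq[String]] and the output path is a placeholder.

    val distinctPerColumn = rows
      .flatMap(row => row.zipWithIndex.map { case (value, idx) => (idx, value) })
      .distinct()

    distinctPerColumn.saveAsTextFile("hdfs:///tmp/distinct-per-column")   // placeholder path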

Re: Finding unique across all columns in dataset

2016-09-19 Thread Abhishek Anand
n use distinct over you data frame or rdd >> >> rdd.distinct >> >> It will give you distinct across your row. >> >> On Mon, Sep 19, 2016 at 2:35 PM, Abhishek Anand >> wrote: >> >>> I have an rdd which contains 14 different columns. I need to

MapWithState with large state

2016-10-31 Thread Abhishek Singh
Can it handle state that is larger than what memory will hold?

Insert a JavaPairDStream into multiple cassandra table on the basis of key.

2016-11-02 Thread Abhishek Anand
Hi All, I have a JavaPairDStream. I want to insert this dstream into multiple cassandra tables on the basis of key. One approach is to filter each key and then insert it into cassandra table. But this would call filter operation "n" times depending on the number of keys. Is there any better appro

Re: Could not parse Master URL for Mesos on Spark 2.1.0

2017-01-09 Thread Abhishek Bhandari
t; machine configuration). >> >> I really don't understand why this is happening since the same >> configuration but using a Spark 2.0.0 is running fine within Vagrant. >> Could someone please help? >> >> thanks in advance, >> Richard >> >

Ingesting data in parallel across workers in Data Frame

2017-01-20 Thread Abhishek Gupta
ata-sources/sql-databases.html>The problem I am facing is that I don't have a numeric column which can be used for achieving the partitioning. Any help would be appreciated. Thank You --Abhishek

[Spark Structured Streaming on K8S]: Debug - File handles/descriptor (unix pipe) leaking

2018-07-19 Thread Abhishek Tripathi
1 (both topic has 20 partition and getting almost 5k records/s ) Hadoop version (Using hdfs for check pointing) : 2.7.2 Thank you for any help. Best Regards, *Abhishek Tripathi*

Re: [Spark Structured Streaming on K8S]: Debug - File handles/descriptor (unix pipe) leaking

2018-07-23 Thread Abhishek Tripathi
/6f838adf6651491bd4f263956f403c74 Thanks. Best Regards, *Abhishek Tripath* On Thu, Jul 19, 2018 at 10:02 AM Abhishek Tripathi wrote: > Hello All!​​ > I am using spark 2.3.1 on kubernetes to run a structured streaming spark > job which read stream from Kafka , perform some window aggregation and > output s

New Spark Datasource for Hive ACID tables

2019-07-26 Thread Abhishek Somani
e tables via Spark as well. The datasource is also available as a spark package, and instructions on how to use it are available on the Github page <https://github.com/qubole/spark-acid>. We welcome your feedback and suggestions. Thanks, Abhishek Somani
