Hi Mayur, 

Thanks for your help. 
I'm not sure I understand what parameters I must give to 
newAPIHadoopFile[K, V, F <: InputFormat[K, V]](path: String, fClass: Class[F], kClass: Class[K], vClass: Class[V], conf: Configuration): JavaPairRDD[K, V] 

It seems it returns a JavaPairRDD, but I currently use sc.textFile and it 
returns just the lines from the files. 
I'm not sure how this is going to work. What does the fClass stand for? 
Why wouldn't I receive just a set of lines? 
Also, were you thinking of a specific configuration option to ignore patterns that match 0 files? 
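
For reference, here is how I currently read that signature, as an untested sketch 
(I'm assuming the usual TextInputFormat, whose keys are line offsets and values are 
the lines themselves): 

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.function.Function;
    import scala.Tuple2;

    // sc is the JavaSparkContext; the path pattern is the one from my snippet below.
    JavaPairRDD<LongWritable, Text> pairs = sc.newAPIHadoopFile(
        "s3n://" + existingFilenamePattern,
        TextInputFormat.class,  // fClass: the InputFormat that parses files into records
        LongWritable.class,     // kClass: the record key (byte offset of each line)
        Text.class,             // vClass: the record value (the line itself)
        new Configuration());

    // Dropping the offsets should give back plain lines, like sc.textFile does.
    JavaRDD<String> lines = pairs.map(new Function<Tuple2<LongWritable, Text>, String>() {
        public String call(Tuple2<LongWritable, Text> record) {
            return record._2().toString();
        }
    });

Is that roughly what you had in mind? 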

Thanks 
Regards, 
Laurent T 


----- Original Message -----

From: "Mayur Rustagi" <mayur.rust...@gmail.com> 
To: "laurent thoulon" <laurent.thou...@ldmobile.net> 
Sent: Wednesday, May 21, 2014, 13:51:46 
Subject: Re: Ignoring S3 0 files exception 


You can try the new Hadoop API (newAPIHadoopFile) on the Spark context. You should 
be able to configure the loader to ignore patterns that match 0 files. 
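
For example, something along these lines might work (rough, untested sketch; 
"LenientTextInputFormat" is just a name I made up), passed as the fClass argument 
to newAPIHadoopFile: 

    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.InvalidInputException;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Treats "Input Pattern matches 0 files" as empty input instead of a fatal error.
    public class LenientTextInputFormat extends TextInputFormat {
        @Override
        protected List<FileStatus> listStatus(JobContext job) throws IOException {
            try {
                return super.listStatus(job);
            } catch (InvalidInputException e) {
                // The glob matched no files: behave as if the input were simply empty.
                return Collections.emptyList();
            }
        }
    }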




Regards 
Mayur 



Mayur Rustagi 
Ph: +1 (760) 203 3257 
http://www.sigmoidanalytics.com 

@mayur_rustagi 




On Wed, May 21, 2014 at 3:36 PM, Laurent T <laurent.thou...@ldmobile.net> 
wrote: 


No one has any idea? It's really troublesome; it seems like I have no way to 
catch errors while an action is being processed and just ignore them. Here's a 
bit more detail on what I'm doing: 

JavaRDD<String> a = sc.textFile("s3n://" + missingFilenamePattern); 
JavaRDD<String> b = sc.textFile("s3n://" + existingFilenamePattern); 

JavaRDD<String> aPlusB = a.union(b);

aPlusB.reduceByKey(MyReducer); // <-- this throws the error (aPlusB mapped to pairs first; simplified here) 

I'd like to ignore the exception caused by a and process b without trouble. 
Thanks 
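
One workaround I'm considering (untested sketch; the helper below is mine, not 
Spark's) is to test each pattern with Hadoop's FileSystem.globStatus before 
handing it to sc.textFile, and skip the ones that match nothing: 

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;

    // Returns true if the glob pattern matches at least one file.
    // (globStatus returns null for a non-existent literal path and an
    // empty array for a glob with no matches, so check both.)
    static boolean patternMatchesFiles(String uri, Configuration conf) throws IOException {
        Path pattern = new Path(uri);
        FileStatus[] matches = pattern.getFileSystem(conf).globStatus(pattern);
        return matches != null && matches.length > 0;
    }

    // Only union in a if its pattern actually matches something.
    JavaRDD<String> b = sc.textFile("s3n://" + existingFilenamePattern);
    if (patternMatchesFiles("s3n://" + missingFilenamePattern, sc.hadoopConfiguration())) {
        b = sc.textFile("s3n://" + missingFilenamePattern).union(b);
    }

But I'd much rather have a way to ignore the exception itself. 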
