Question 2: If you are creating the DataFrame while reading the parquet file, you can apply rtrim to the affected columns at that point.
df = spark.read.load("file.parquet")
df.select(rtrim("columnName"))

Regards,
Prathmesh Ranaut
https://linkedin.com/in/prathmeshranaut

> On Jul 12, 2019, at 9:15 AM, anbutech <anbutec...@outlook.com> wrote:
>
> Hello All,
>
> Could you please help me fix the questions below?
>
> Question 1:
>
> I have tried the options below while writing the final data to a csv file,
> to avoid double quotes in that csv file, but none of them worked. I'm using
> Spark version 2.2 and Scala version 2.11.
>
> .option("quote", "\"")
> .option("escape", ":")
> .option("escape", "")
> .option("quote", "\u0000")
>
> Code:
>
> finaldataset
>   .repartition(numberOfPartitions)
>   .write
>   .mode(SaveMode.Overwrite)
>   .option("delimiter", "|")
>   .option("header", "true")
>   .csv("path")
>
> output_data.csv:
>
> field|field2|""|field4|field5|""|field6|""|field7
>
> I want to remove the double quotes in the csv file while writing it with
> Spark. Is there any option available for this?
>
> Question 2: Is there any way to remove the trailing white spaces in the
> fields while reading a parquet file?
>
> Thanks,
> Anbu
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> ---------------------------------------------------------------------