Yes, many thanks, German. Jayesh kindly reminded me about it as well.

It is amazing how at times one overlooks these typos and assumes some more
sophisticated cause for the code not working.
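
For what it is worth, withColumnRenamed silently returns the DataFrame
unchanged when the source column does not exist, which is why the mistake
only surfaced later, at the select. Below is a minimal sketch of a guard
that fails fast instead; renameOrFail is a hypothetical helper, not part
of the Spark API:

    import org.apache.spark.sql.DataFrame

    // Hypothetical helper: fail fast if the source column is missing,
    // instead of silently doing nothing as withColumnRenamed does.
    def renameOrFail(df: DataFrame, from: String, to: String): DataFrame = {
      require(df.columns.contains(from),
        s"Column '$from' not found; available: ${df.columns.mkString(", ")}")
      df.withColumnRenamed(from, to)
    }

    // e.g. renameOrFail(summaryACC, "created", "Date Calculated") would
    // have raised an error immediately, pointing at the "ceated" typo.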

Mich



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw





Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 4 Jan 2021 at 19:49, German Schiavon <gschiavonsp...@gmail.com>
wrote:

> Hi,
>
> I think you have a typo:
>
> root
>  |-- ceated: string (nullable = true)
>
>
> and then:
>
>     withColumnRenamed("created","Date Calculated").
>
>
> On Mon, 4 Jan 2021 at 19:12, Lalwani, Jayesh <jlalw...@amazon.com.invalid>
> wrote:
>
>> You don’t have a column named “created”. The column name is “ceated”,
>> without the “r”.
>>
>>
>>
>> From: Mich Talebzadeh <mich.talebza...@gmail.com>
>> Date: Monday, January 4, 2021 at 1:06 PM
>> To: "user @spark" <user@spark.apache.org>
>> Subject: [EXTERNAL] Spark DF does not rename the column
>>
>>
>>
>> Hi,
>>
>>
>>
>> Spark version 2.4.3
>>
>>
>>
>> I don't know the cause of this.
>>
>>
>>
>> This renaming of DF columns used to work fine. I made a couple of changes
>> to the Spark/Scala code, none of them relevant to this table, and now it
>> refuses to rename the columns for this table!
>>
>>
>>
>> val summaryACC = HiveContext.table("summaryACC")
>> summaryACC.printSchema()
>>
>>
>>
>> root
>>  |-- ceated: string (nullable = true)
>>  |-- hashtag: string (nullable = true)
>>  |-- paid: float (nullable = true)
>>  |-- received: float (nullable = true)
>>
>>
>>
>> summaryACC.
>>     orderBy(desc("paid"),desc("received")).
>>     withColumnRenamed("created","Date Calculated").
>>     withColumnRenamed("hashtag","Who").
>>     withColumn(("received"),format_number(col("received"),2)).
>>     withColumn(("paid"),format_number(col("paid"),2)).
>>     withColumnRenamed("paid","paid out/GBP").
>>     withColumnRenamed("received","paid in/GBP").
>>     withColumn("paid in/GBP",when(col("paid in/GBP") === "0.00","--").otherwise(col("paid in/GBP"))).
>>     withColumn("paid out/GBP",when(col("paid out/GBP") === "0.00","--").otherwise(col("paid out/GBP"))).
>>     select("Date Calculated","Who","paid in/GBP","paid out/GBP").show(1000,false)
>>
>>
>>
>> and this is the error:
>>
>>
>>
>> org.apache.spark.sql.AnalysisException: cannot resolve '`Date Calculated`' given input columns: [alayer.summaryacc.ceated, Who, paid out/GBP, paid in/GBP];;
>>
>>
>>
>> This used to work before!
>>
>>
>>
>> +----------------------------+------------------+-----------+------------+
>> |Date Calculated             |Who               |paid in/GBP|paid out/GBP|
>> +----------------------------+------------------+-----------+------------+
>> |Mon Jan 04 14:22:17 GMT 2021|paypal            |579.98     |1,526.86    |
>>
>>
>>
>> Appreciate any ideas.
>>
>>
>>
>> Thanks, Mich
>>
>>
>>
>
