Your code looks overly complicated, and the relevant parts are missing. If
possible, please post the complete snippet, including how the rows are
retrieved and their type, so we get the complete picture and can try to help.

As a first simplification, you can convert aMap to Seq[(String, (String,
String))] and then flatten each nested tuple into its parts, giving a
Seq[String] that you can pass to toDF via varargs expansion.

val colNames: Seq[String] = aMap.toSeq.flatMap(kv => Seq(kv._1, kv._2._1,
kv._2._2))

Depending on the actual type of aMap this may not compile; the snippet above
assumes it to be Map[String, (String, String)].
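
To illustrate, here is a minimal, self-contained sketch. The SparkSession
setup, app name, and sample row are placeholders, and it assumes aMap really
has the shape above:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("toDF-sketch").getOrCreate()
import spark.implicits._

val aMap = Map("admit" -> ("description", "comments"))

// flatMap flattens each (key, (v1, v2)) entry into three column names,
// so colNames is Seq[String] rather than Seq[Seq[String]].
val colNames: Seq[String] = aMap.toSeq.flatMap(kv => Seq(kv._1, kv._2._1, kv._2._2))

// toDF takes the names via varargs expansion; the row arity must match.
val df = Seq(("a", "b", "c")).toDF(colNames: _*)
df.printSchema()

// For the Seq[Any] question itself: mapping with _.toString (as Sean
// suggests below) turns any Seq[Any] into a Seq[String].
val anySeq: Seq[Any] = Seq("admit", 1, true)
val asStrings: Seq[String] = anySeq.map(_.toString)

Note that a plain Map does not guarantee iteration order, so with more than
one entry the resulting column order may not be what you expect.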

Best Regards

Vikas Garg <sperry...@gmail.com> wrote on Fri, 18 Dec 2020 at 15:46:

> I am getting the table schema through a Map, which I have converted to a
> Seq and am passing to toDF
>
> On Fri, 18 Dec 2020 at 20:13, Sean Owen <sro...@gmail.com> wrote:
>
>> It's not really a Spark question. .toDF() takes column names.
>> atrb.head.productIterator.toSeq.map(_.toString)? But it's not clear what
>> you mean the column names to be.
>>
>> On Fri, Dec 18, 2020 at 8:37 AM Vikas Garg <sperry...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Can someone please tell me how to convert a Seq[Any] to a Seq[String]?
>>>
>>> For the line
>>> val df = row.toSeq.toDF(newCol.toSeq: _*)
>>> I get that error message.
>>>
>>> I converted the Map "val aMap = Map("admit" -> ("description","comments"))"
>>> to a Seq:
>>>
>>> import scala.collection.mutable.ListBuffer
>>>
>>> var atrb = ListBuffer[(String,String,String)]()
>>>
>>> for((key,value) <- aMap){
>>>   atrb += ((key, value._1, value._2))
>>> }
>>>
>>> // productIterator yields Iterator[Any], so newCol ends up as Seq[Any]
>>> var newCol = atrb.head.productIterator.toList.toSeq
>>>
>>> Could someone please help me with this?
>>>
>>> Thanks
>>>
>>> --
Roland Johann
Data Architect/Data Engineer

phenetic GmbH
Lütticher Straße 10, 50674 Köln, Germany

Mobil: +49 172 365 26 46
Mail: roland.joh...@phenetic.io
Web: phenetic.io

Handelsregister: Amtsgericht Köln (HRB 92595)
Geschäftsführer: Roland Johann, Uwe Reimann
