You can use DESCRIBE FORMATTED <DATABASE>.<TABLE_NAME> to get that info.
It is based on the same command in Hive; however, in the Spark shell it throws
two spurious error lines, as shown below (I don't see them when running
DESCRIBE ... directly in Hive).
Example
scala> sql("describe formatted test.t14").collect.foreach(println)
16/03/25 22:32:38 ERROR Hive: Table test not found: test.test table not found
16/03/25 22:32:38 ERROR Hive: Table test not found: test.test table not found
[# col_name data_type comment ]
[ ]
[invoicenumber int ]
[paymentdate date ]
[net decimal(20,2) ]
[vat decimal(20,2) ]
[total decimal(20,2) ]
[ ]
[# Detailed Table Information ]
[Database: test ]
[Owner: hduser ]
[CreateTime: Fri Mar 25 22:13:44 GMT 2016 ]
[LastAccessTime: UNKNOWN ]
[Protect Mode: None ]
[Retention: 0 ]
[Location: hdfs://rhes564:9000/user/hive/warehouse/test.db/t14 ]
[Table Type: MANAGED_TABLE ]
[Table Parameters: ]
[ COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}]
[ comment from csv file from excel sheet]
[ numFiles 2 ]
[ orc.compress ZLIB ]
[ totalSize 1090 ]
[ transient_lastDdlTime 1458944025 ]
[ ]
[# Storage Information ]
[SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde ]
[InputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat ]
[OutputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat ]
[Compressed: No ]
[Num Buckets: -1 ]
[Bucket Columns: [] ]
[Sort Columns: [] ]
[Storage Desc Params: ]
[ serialization.format 1 ]
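If you only need the creation time, you can filter the collected rows for the
CreateTime entry. A minimal sketch from the Spark shell, assuming the same
test.t14 table as above (adjust the database and table name for your case):

scala> sql("describe formatted test.t14").collect.filter(_.toString.contains("CreateTime")).foreach(println)

On the table above this should print just the
[CreateTime: Fri Mar 25 22:13:44 GMT 2016 ] row; the exact row layout can vary
between Hive and Spark versions.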
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
On 25 March 2016 at 22:12, Ashok Kumar <[email protected]> wrote:
> Experts,
>
> I would like to know when a table was created in a Hive database, using the
> Spark shell.
>
> Thanks
>