Adding to that: surprisingly, it gives the correct result if I use derived
tables rather than the original tables.
Changed query:

select * from (select * from T1) a
LEFT OUTER JOIN (select * from T2) b
ON a.t1c2 = b.t2c1;
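For comparison, the direct join that reportedly returns the wrong values would look like this (a sketch; T1, T2 and the column names are taken from the query above, everything else is assumed):

```sql
-- Direct join on the base tables (reportedly returns strange values
-- for some columns in this setup):
SELECT *
FROM T1 a
LEFT OUTER JOIN T2 b
  ON a.t1c2 = b.t2c1;

-- Workaround described above: wrap each table in a derived table first.
SELECT *
FROM (SELECT * FROM T1) a
LEFT OUTER JOIN (SELECT * FROM T2) b
  ON a.t1c2 = b.t2c1;
```

The two forms should be logically equivalent, which is what makes the differing results surprising.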
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Mon, Jan 11, 2016
Hive,
I am trying out Hive on Spark with Hive 1.2.1 and Spark 1.5.2. Could
someone help me with this? Thanks!
Following are my steps:
1. Build Spark 1.5.2 without Hive and the Hive Thrift Server. At this point, I can
use it to submit applications using spark-submit --master yarn-client.
2. And t
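For reference, wiring Hive to that Spark build is usually a matter of a few session or hive-site.xml properties (a sketch for a yarn-client setup; the paths and table name are assumptions, so check the values against the Hive 1.2.1 / Spark 1.5.2 "Hive on Spark" docs):

```sql
-- In the Hive CLI / Beeline session (or set permanently in hive-site.xml):
set spark.home=/path/to/spark-1.5.2;   -- the Spark build without Hive
set spark.master=yarn-client;
set hive.execution.engine=spark;       -- switch from mr/tez to Spark

-- Any subsequent query should now run as a Spark job:
select count(*) from some_table;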
> java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException:
> java.lang.ClassCastException:
> org.apache.hadoop.hive.serde2.lazy.LazyString cannot be cast to
> org.apache.hadoop.io.Text
> ...
> at
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector
> javax.net.ssl.SSLHandshakeException:
> sun.security.validator.ValidatorException: PKIX path building failed:
> sun.security.provider.certpath.SunCertPathBuilderException: unable to
> find valid certification path to requested target
There's a Linux package named ca-certificates(-java) which might be
missing.
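If installing ca-certificates(-java) is not enough, the server's certificate can also be imported into the JVM trust store manually (a sketch; the keystore path varies by JDK install, the alias and certificate file are assumptions, and "changeit" is the usual default cacerts password):

```shell
# Import the service's CA certificate into the JVM trust store used by
# the Hive/Hadoop JVMs, so PKIX path building can succeed:
keytool -importcert \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit \
  -alias my-service-ca \
  -file /tmp/service-ca.crt
```

Note that a Hive UDF runs inside the Hadoop task JVMs, so the trust store of those JVMs is the one that matters, not the one used by a standalone Java program.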
Thanks Sergey for looking into this.
Below is the exception we are getting when we call it from a Hive UDF; from a
separate Java program it works fine:
javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException
Take a look at this:
https://github.com/dvasilen/Hive-XML-SerDe/wiki/XML-data-sources
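To make the SerDe approach concrete, a table definition might look roughly like this (a sketch based on the Hive-XML-SerDe wiki linked above; the table name, tags and XPath expressions are made up for illustration):

```sql
CREATE TABLE orders_xml (
  order_id STRING,
  amount   STRING
)
ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
WITH SERDEPROPERTIES (
  -- map each column to an XPath expression over one XML record:
  "column.xpath.order_id" = "/order/@id",
  "column.xpath.amount"   = "/order/amount/text()"
)
STORED AS
  INPUTFORMAT 'com.ibm.spss.hive.serde2.xml.XmlInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
TBLPROPERTIES (
  -- delimiters that mark one record in the raw XML file:
  "xmlinput.start" = "<order",
  "xmlinput.end"   = "</order>"
);
```

With that in place, loading is just pointing the table (or LOAD DATA) at the XML files, and the tags become queryable columns.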
On Mon, Jan 11, 2016 at 9:30 AM, nitinpathakala .
wrote:
> Hello,
>
> Any ideas on this?
>
> Thanks,
> Nitin
>
> On Thu, Jan 7, 2016 at 6:06 PM, nitinpathakala . wrote:
>
>> Hello,
>>
>> We have a requirement to load data from xml file to Hive tables.
Any help on this?
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Sat, Jan 9, 2016 at 3:42 PM, @Sanjiv Singh
wrote:
> Hi All,
>
> I am facing strange behaviour, as explained below. I have two Hive tables,
> T1 and T2, joined with a LEFT OUTER JOIN. I am getting strange values for
> two columns
Hi,
Could someone help on this question?
I have a Parquet file, and I need to figure out its schema before I create a
table and run queries against it. I know Spark SQL can do this, but I would
like to ask whether Hive supports this in some way.
Thanks!
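Outside of Hive itself, the usual way to inspect a Parquet file's schema is the parquet-tools utility (a sketch; the jar version and file paths are assumptions):

```shell
# Print the schema stored in the Parquet file footer (HDFS file):
hadoop jar parquet-tools-1.6.0.jar schema /user/me/data/myfile.parquet

# Or, for a local file:
java -jar parquet-tools-1.6.0.jar schema /tmp/myfile.parquet
```

The printed message type can then be translated by hand into a matching CREATE TABLE ... STORED AS PARQUET statement.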
On 2016-01-09 11:19:34, "Todd" wrote:
Hi,
I wou
Hello,
Any ideas on this?
Thanks,
Nitin
On Thu, Jan 7, 2016 at 6:06 PM, nitinpathakala .
wrote:
> Hello,
>
> We have a requirement to load data from XML files into Hive tables.
> The XML tags would be the columns and the values will be the data for those
> columns.
> Any pointers will be really helpful.
Apologies, a correction.
The phrase below:
… It does not really make sense to enforce the same logic in Hive which is
built on schema on read. Once you start adding foreign keys constraints to
Hive, you are actually making it “schema on read”.
It should read
It does not really make
Hi,
Primary key and foreign key constraints are really more applicable to
transactional databases, where the transaction logic needs to be enforced: once
a unit of work is completed, it is either committed or rolled back.
It does not really make sense to enforce the same logic in Hive
You can join on any equality criterion, just like in any other relational
database. Foreign keys in "standard" relational databases are primarily an
integrity constraint. Hive in general lacks integrity constraints.
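In other words, the relationship lives in the query rather than in a declared constraint (a sketch with made-up tables, where orders.customer_id plays the "foreign key" role):

```sql
-- Join on the equality criterion, exactly as with a real FK:
SELECT c.name, o.order_id
FROM orders o
JOIN customers c
  ON o.customer_id = c.customer_id;

-- Hive will not reject orphaned rows at load time;
-- integrity has to be checked explicitly:
SELECT o.*
FROM orders o
LEFT OUTER JOIN customers c
  ON o.customer_id = c.customer_id
WHERE c.customer_id IS NULL;
```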
On Sun, Jan 10, 2016 at 9:45 AM, Ashok Kumar wrote:
> hi,
>
> what is the equiva
hi,
what is the equivalent to foreign keys in Hive?
Thanks
Hi,
I'm trying to break a row into two rows based on two different columns by
using the following query:
SELECT mystack.alias1
FROM cdrtable
LATERAL VIEW stack(2, caller_IMEI, recipient_IMEI) mystack AS alias1;
The exception I'm hitting is:
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException
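If this is the LazyString-cannot-be-cast-to-Text error quoted elsewhere in this thread, it is a rough edge in stack() in some Hive versions; two things that may help are casting the arguments explicitly, or avoiding the UDTF entirely (a sketch, using the table and columns from the query above):

```sql
-- Workaround 1: force both arguments to a plain string type.
SELECT mystack.alias1
FROM cdrtable
LATERAL VIEW stack(
  2,
  cast(caller_IMEI AS string),
  cast(recipient_IMEI AS string)
) mystack AS alias1;

-- Workaround 2: equivalent rewrite with UNION ALL, no stack() at all.
SELECT caller_IMEI AS alias1 FROM cdrtable
UNION ALL
SELECT recipient_IMEI FROM cdrtable;
```

The UNION ALL form scans the table twice, but it sidesteps the UDTF code path that raises the exception.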